UMBC ebiquity

Archive for the 'Programming' Category

Introduction to Microservices Architecture

March 19th, 2016, by Tim Finin, posted in Programming

Introduction to Microservices Architecture

Vladimir Korolev
10:00-11:00am, Monday, March 21, 2016, ITE 346

Microservices is a new style of software architecture that relies on separately deployed, loosely coupled components. Advantages of this architectural style are faster development cycles, better system resilience, smoother and easier scalability, and less friction with continuous deployment. In his talk, Vlad Korolev will give an overview of the architecture, show how to get started, and share personal experiences and gotchas encountered on several microservices-based projects.

Lisp in 96 lines of Python: Maxwell's equations of software

September 30th, 2010, by Tim Finin, posted in Programming

Peter Norvig has exquisite tastes in programming, is a Lisp guru and is also a great Python hacker. Put that together and what do you get? An interpreter for the core of the Lisp dialect Scheme in 96 lines of Python. Norvig cites Alan Kay's view of Lisp as "Maxwell's Equations of Software", expressed in a 2004 interview with Stu Feldman:

SF: If nothing else, Lisp was carefully defined in terms of Lisp.

AK: Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were “Maxwell’s Equations of Software!” This is the whole world of programming in a few lines that I can put my hand over.

There is also a companion essay, (How to Write a ((Better) Lisp) Interpreter (in Python)), that shows how to add other features, like macros, quasi-quote, tail recursion optimization and continuations. Sadly, this bloats the code to well over 200 lines.
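Norvig's actual lis.py is worth reading in full; its flavor can be suggested by a drastically simplified sketch (my own, not Norvig's code): tokenize a string, read it into nested lists, and evaluate.

```python
# A drastically simplified sketch in the spirit of Norvig's lis.py
# (not his actual code): tokenize, read, and evaluate a tiny Scheme.
import math
import operator as op

def tokenize(s):
    return s.replace('(', ' ( ').replace(')', ' ) ').split()

def read(tokens):
    t = tokens.pop(0)
    if t == '(':
        lst = []
        while tokens[0] != ')':
            lst.append(read(tokens))
        tokens.pop(0)  # discard the closing ')'
        return lst
    try:
        return int(t)
    except ValueError:
        return t  # a symbol

ENV = {'+': op.add, '-': op.sub, '*': op.mul, '/': op.truediv,
       'sqrt': math.sqrt}

def eval_(x, env=ENV):
    if isinstance(x, str):          # symbol lookup
        return env[x]
    if not isinstance(x, list):     # literal number
        return x
    if x[0] == 'if':                # (if test conseq alt)
        _, test, conseq, alt = x
        return eval_(conseq if eval_(test, env) else alt, env)
    proc = eval_(x[0], env)         # procedure application
    return proc(*[eval_(a, env) for a in x[1:]])

print(eval_(read(tokenize('(* (+ 1 2) 4)'))))  # → 12
```

Add `lambda`, `define`, and proper environments and you are most of the way to Norvig's 96 lines.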

Kodu: see apple red, move toward quickly

September 21st, 2010, by Tim Finin, posted in Games, Programming

The New York Times has a short article, The 8-Year-Old Programmer, on Kodu, a programming environment intended to help young children learn to write programs.

“Kodu, built by a team at Microsoft’s main campus outside Seattle, is a programming environment that runs on an Xbox 360, using the game console’s controller rather than a keyboard. Instead of typing if/then statements in a syntax that must be memorized — as adult programmers do — the student uses the Xbox controller to pop up menus that contain options from which to choose. Kodu itself resembles a video game, with a point-and-click interface instead of the thousand-lines-of-text coding tools used by grown-ups.”

You can also read about Kodu in the Wikipedia article Kodu Game Lab or the Kodu project page at Microsoft Research, from which you can also download a free version for the PC.

Kodu is a rule-based, event-driven language with a simple context-free grammar that lets you write rules like “see apple red, move toward quickly”.
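The when/do shape of such rules is easy to mimic in a few lines of Python. The following is a hypothetical sketch of one rule's semantics, not how Kodu is actually implemented:

```python
# Hypothetical sketch of a Kodu-style "when condition, do action" rule,
# e.g. "see apple red, move toward quickly" -- not actual Kodu internals.

def see(world, kind, color):
    """Condition: return the first matching object, or None."""
    return next((o for o in world['objects']
                 if o['kind'] == kind and o['color'] == color), None)

def move_toward(agent, target, speed):
    """Action: step the agent toward the target along one axis."""
    dx = target['x'] - agent['x']
    agent['x'] += (1 if dx > 0 else -1) * speed

world = {'objects': [{'kind': 'apple', 'color': 'red', 'x': 10}]}
agent = {'x': 0}

# The rule "see apple red, move toward quickly", run once per game tick:
for tick in range(3):
    target = see(world, 'apple', 'red')
    if target:
        move_toward(agent, target, speed=2)   # speed=2 plays "quickly"

print(agent['x'])  # → 6
```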

Kodu takes its place in a long history of programming languages developed to teach programming to children, starting with Logo in the late 1960s. None of these have ever truly caught on, although Logo was taught in many elementary schools in the 1980s. As a computer scientist, I believe that being able to write simple programs for one’s own use will eventually be a skill that all educated people will have, just as being able to do basic numerical computations and write effective text are today.

A review of the Google Go programming language

November 12th, 2009, by Tim Finin, posted in Google, Programming

Mark Chu-Carroll is a Google software engineer who’s written a long, detailed and informed review of Google’s new programming language Go. It’s worth a read if you are interested in understanding what it’s like as a programming language. Here are a few points that I took note of.

    “The guys who designed Go were very focused on keeping things as small and simple as possible. When you look at it in contrast to a language like C++, it’s absolutely striking. Go is very small, and very simple. There’s no cruft. No redundancy. Everything has been pared down. But for the most part, they give you what you need. If you want a C-like language with some basic object-oriented features and garbage collection, Go is about as simple as you could realistically hope to get.”

    “The most innovative thing about it is its type system. … It ends up giving you something with the flavor of Python-ish duck typing, but with full type-checking from the compiler.”

    “Go programs compile really astonishingly quickly. When I first tried it, I thought that I had made a mistake building the compiler. It was just too damned fast. I’d never seen anything quite like it.”

    “At the end of the day, what do I think? I like Go, but I don’t love it. If it had generics, it would definitely be my favorite of the C/C++/C#/Java family. It’s got a very elegant simplicity to it which I really like. The interface type system is wonderful. The overall structure of programs and modules is excellent. But it’s got some ugliness. … It’s not going to wipe C++ off the face of the earth. But I think it will establish itself as a solid alternative.”

Go sounds like a language that will help you grow as a computer scientist if you use it. That’s a good enough recommendation for me.

Can a programming language make you happy?

May 11th, 2009, by Tim Finin, posted in Blogging, Programming, Social media, Twitter

We all know that some programming languages are a joy to use and others can be damned painful. Lukas Biewald ran an interesting experiment to gather some data about this, described in his post The Programming Language with the Happiest Users.

“Which languages make programmers the happiest? … I decided to do a little market research. I scraped the top 150 most recent tweets on Twitter for the query “X language” where X was one of {COBOL, Ruby, Fortran, Python, Visual Basic, Perl, Java, Haskell, Lisp, C}. Then I asked three people on Amazon Mechanical Turk to verify that the tweet was on the topic. If so, I asked if the tweet seemed positive, negative or neutral. …”
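Aggregating the Turk judgments then reduces to a simple tally; a toy version with made-up labels (not Biewald's actual data) might look like:

```python
# Toy tally of per-language sentiment labels (made-up data, not
# Biewald's results): fraction of positive on-topic tweets per language.
from collections import Counter

labels = [('Python', 'positive'), ('Python', 'neutral'),
          ('COBOL', 'negative'), ('Python', 'positive'),
          ('COBOL', 'negative'), ('COBOL', 'positive')]

counts = {}
for lang, label in labels:
    counts.setdefault(lang, Counter())[label] += 1

happiness = {lang: c['positive'] / sum(c.values())
             for lang, c in counts.items()}
print(happiness)  # Python: 2/3 positive, COBOL: 1/3 positive
```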

Great idea and a nice use of Amazon Mechanical Turk!

Python: Basic of the future!?!

April 28th, 2009, by Tim Finin, posted in Programming

Guido van Rossum has been blogging about the lack of support for optimizing tail recursion in Python (he’s agin it). His most recent post, Final Words on Tail Calls, includes this paragraph near the end.

‘And here it ends. One other thing I learned is that some in the academic world scornfully refer to Python as “the Basic of the future”. Personally, I rather see that as a badge of honor, and it gives me an opportunity to plug a book of interviews with language designers to which I contributed, side by side with the creators of Basic, C++, Perl, Java, and other academically scorned languages — as well as those of ML and Haskell, I hasten to add. (Apparently the creators of Scheme were too busy arguing whether to say “tail call optimization” or “proper tail recursion.” :-)’

I’ve not yet been able to track down any sources calling Python the ‘Basic of the future’ — all I could find is one person who referred to Java this way and another referring to Javascript. But for a programming language, it is a great slur, or maybe, to take Guido’s stance, a great compliment.
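The practical consequence of Guido's stance is easy to demonstrate: a function that is tail-recursive in form still consumes a stack frame per call and eventually hits the interpreter's recursion limit, so the idiomatic Python fix is an explicit loop.

```python
# Python does not optimize tail calls, so even an accumulator-style
# "tail-recursive" factorial blows the stack at sufficient depth.
import sys

def fact_rec(n, acc=1):
    # Tail call in form only: each call still consumes a stack frame.
    return acc if n <= 1 else fact_rec(n - 1, acc * n)

def fact_iter(n):
    # The idiomatic Python rewrite: an explicit loop.
    acc = 1
    for i in range(2, n + 1):
        acc *= i
    return acc

try:
    fact_rec(sys.getrecursionlimit() + 100)
except RecursionError:
    print("tail 'recursion' still blows the stack")

print(fact_iter(10))  # → 3628800
```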

Tutorial: Hadoop on Windows with Eclipse

April 9th, 2009, by Tim Finin, posted in cloud computing, High performance computing, MC2, Multicore Computation Center, Programming, Semantic Web

Hadoop has become one of the most popular frameworks to exploit parallelism on a computing cluster. You don’t actually need access to a cluster to try Hadoop, learn how to use it, and develop code to solve your own problems.

UMBC Ph.D student Vlad Korolev has written an excellent tutorial, Hadoop on Windows with Eclipse, showing how to install and use Hadoop on a single computer running Microsoft Windows. It also covers the Eclipse Hadoop plugin, which enables you to create and run Hadoop projects from Eclipse. In addition to step by step instructions, the tutorial has short videos documenting the process.

If you want to explore Hadoop and are comfortable developing Java programs in Eclipse on a Windows box, this tutorial will get you going. Once you have mastered Hadoop and have developed your first project using it, you can go about finding a cluster to run it on.

Perl/Python Phrasebook

February 5th, 2009, by Tim Finin, posted in Programming

People whose native language is Perl might find the Perl/Python phrasebook handy. When talking to the Python interpreter, some try hand gestures, typing slowly or using ALL CAPS, but these seldom work and can often annoy or even alarm the interpreter. This phrasebook covers the most common things you need to say to a simple Python system. For example, if you wanted to tell it to read your file as a list of lines, there’s a phrasebook entry that shows just how to say it.

my $filename = "cooktest1.1-1";
open my $f, $filename or die "can't open $filename: $!\n";
@lines = <$f>;

filename = "cooktest1.1-1"
f = open(filename)   # Python has exceptions with somewhat-easy to
                     # understand error messages. If the file could
                     # not be opened, it would say "No such file or
                     # directory: %filename", which is as
                     # understandable as "can't open $filename:"
lines = f.readlines()
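Native speakers of more recent Python dialects prefer to phrase this with a context manager, which closes the file even when something goes wrong mid-sentence. A sketch of the same phrase (the sample file is created first so the example runs on its own):

```python
# Modern Python idiom for the same phrasebook entry: a "with" block
# closes the file automatically, even if an exception is raised.
filename = "cooktest1.1-1"
with open(filename, "w") as f:          # create a sample file to read
    f.write("line one\nline two\n")

with open(filename) as f:
    lines = f.readlines()

print(lines)  # → ['line one\n', 'line two\n']
```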

Many of the entries also contain helpful facts and advice about the customs and social norms of native Python speakers. Not only can this keep you out of trouble, it will deepen your understanding of the colorful and sometimes quaint Python speakers. I hope that the pocket travel version of the phrasebook, suitable for downloading onto an iPod, will be out soon.

quick and easy MapReduce for Python

January 2nd, 2009, by Tim Finin, posted in cloud computing, MC2, Programming

The amount of free, interesting, and useful data is growing explosively. Luckily, computers are getting cheaper as we speak, they are all connected by a robust communication infrastructure, and software for analyzing data is better than ever. That’s why everyone is interested in easy-to-use frameworks like MapReduce that let every-day programmers run their data crunching in parallel. One such package is a very simple MapReduce-like system inspired by Ruby’s Starfish. It doesn’t aim to meet all your distributed computing needs, but its simple approach is amenable to a large proportion of parallelizable tasks. As the documentation puts it: “If your code has a for-loop, there’s a good chance that you can make it distributed with just a few small changes. If you’re already using Python’s map() and reduce() functions, the changes needed are trivial!” Below is the simple example given in the documentation, which computes the first 100 triangular numbers.

# compute first 100 triangular numbers. Do
# ' server' on server with address IP
# and ' client IP' on each client. Server uses source
# & final, sends tasks to clients, integrates results. Clients
# get tasks from server, use mapfn & reducefn, return results.

source = dict(zip(range(100), range(100)))

def final(key, value):
    print key, value

def mapfn(key, value):
    for i in range(value + 1):
        yield key, i

def reducefn(key, value):
    return sum(value)

Put on all of the machines you want to use. On the machine you will use as a server (with ip address <ip>), also install, and then execute:

     python server &

On each of your clients, run

     python client <ip> &

You can try this out using the same machine to run the server process and one or more client processes, of course.

When the clients register with the server, they will get a copy of the code and wait for tasks from the server. The server accesses the data from source and distributes tasks to the clients. These in turn use mapfn and reducefn to complete the tasks, returning the results. The server integrates these and, when all have completed, invokes final, which in this case just prints the answers, and halts. The clients continue to run, waiting for more tasks to do. This package is not a replacement for more sophisticated frameworks like Hadoop or Disco, but if you are working in Python, its KISS approach is a good way to get started with the MapReduce paradigm and might be all you need for a small project.
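You can simulate what the server computes in a single process (a sketch of the map/reduce semantics only, with none of the library's networking): run mapfn over every source item, group the emitted pairs by key, and reduce each group.

```python
# Single-process simulation of the example's map/reduce semantics
# (no networking): map over source, group by key, reduce, inspect.
from collections import defaultdict

source = dict(zip(range(100), range(100)))

def mapfn(key, value):
    for i in range(value + 1):
        yield key, i

def reducefn(key, values):
    return sum(values)

grouped = defaultdict(list)
for k, v in source.items():
    for mk, mv in mapfn(k, v):
        grouped[mk].append(mv)

results = {k: reducefn(k, vs) for k, vs in grouped.items()}
print(results[10], results[99])  # → 55 4950 (triangular numbers)
```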

(Note: The package has not been updated since April 2008, so its status is not clear. But further development would run the risk of making it more complex, which would be self-defeating.)

WWGD: Understanding Google’s Technology Stack

December 24th, 2008, by Tim Finin, posted in AI, cloud computing, GENERAL, Google, Programming, Semantic Web, Social media, Web

It’s popular to ask “What Would Google Do” these days — The Google reports over 7,000 results for the phrase. Of course, it’s not just about Google, which we all use as the archetype for a new Web way of building and thinking about information systems. Asking WWGD can be productive, but only if we know how to implement and exploit the insights the answer gives us. This in turn requires us (well, some of us, anyway) to understand the algorithms, techniques, and software technology that Google and other large scale Web-oriented companies use. We need to ask “How Would Google Do It”.

Michael Nielsen has a nice post on using your laptop to compute PageRank for millions of webpages. His post reviews PageRank, shows how to compute it, and presents a short but reasonably efficient Python program that can easily handle a graph with a few million nodes. While not sufficient for many applications, like the Web itself, there are lots of interesting and significant graphs this small Python program can handle — Wikipedia pages, DBLP publications, RDF namespaces, BGP routers, Twitter followers, etc.
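The heart of such a program is just the power method applied to the link structure. Here is a minimal toy sketch with a made-up three-page graph (Nielsen's code uses sparse data structures and is far more careful):

```python
# Minimal PageRank by power iteration (toy dense sketch with made-up
# data; real code needs sparse structures for millions of pages).
links = {0: [1, 2], 1: [2], 2: [0]}   # page -> pages it links to
n, d = len(links), 0.85               # d is the usual damping factor

rank = {p: 1.0 / n for p in links}
for _ in range(50):
    # Each page keeps (1-d)/n base rank and receives a d-damped
    # share of rank from every page that links to it.
    new = {p: (1 - d) / n for p in links}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)
    rank = new

print(rank)  # ranks sum to ~1; page 2, with two in-links, ranks highest
```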

The post is part of a series Nielsen is writing on the Google Technology Stack, including PageRank, MapReduce, BigTable, and GFS. The posts are a byproduct of a series of weekly lectures he began giving earlier this month in Waterloo. Here’s the way that Nielsen describes the series.

“Part of what makes Google such an amazing engine of innovation is their internal technology stack: a set of powerful proprietary technologies that makes it easy for Google developers to generate and process enormous quantities of data. According to a senior Microsoft developer who moved to Google, Googlers work and think at a higher level of abstraction than do developers at many other companies, including Microsoft: “Google uses Bayesian filtering the way Microsoft uses the if statement” (Credit: Joel Spolsky). This series of posts describes some of the technologies that make this high level of abstraction possible.”

Videos of the first two lectures, Introduction to PageRank and Building our PageRank Intuition, are available online. Nielsen illustrates the concepts and algorithms with well-written Python code and provides exercises to help readers master the material, as well as “more challenging and often open-ended problems” which he has worked on but not completely solved.

Nielsen was trained as a theoretical physicist but has shifted his attention to “the development of new tools for scientific collaboration and publication”. As far as I can see, he is offering these as free public lectures out of a desire to share his knowledge and also to help (or maybe force) himself to deepen his own understanding of the topics and develop better ways of explaining them. In both cases, it’s an admirable and inspiring example for us all, and appropriate for the holiday season. Merry Christmas!

UMBC to offer special course in parallel programming

December 9th, 2008, by Tim Finin, posted in cloud computing, High performance computing, MC2, Multicore Computation Center, Programming

There’s a very interesting late addition to UMBC’s spring schedule — CMSC 491/691A, a special topics class on parallel programming. Programming multi-core and cell-based processors is likely to be an important skill in the coming years, especially for systems that require high performance such as those involving scientific computing, graphics and interactive games.

The class will meet Tu/Thr from 7:00pm to 8:15pm in the “Game Lab” in ECS 005A and will be taught by research professors John Dorband and Shujia Zhou. Both are very experienced in high-performance and parallel programming. Professor Dorband helped design and build the first Beowulf cluster computer in the mid 1990s when he worked at NASA’s Goddard Space Flight Center. Shujia Zhou has worked at Northrop Grumman and NASA/Goddard on a wide range of projects using high-performance and parallel computing for climate modeling and simulation.

CMSC 491/691a Special Topics in Computer Science:
Introduction to parallel computing emphasizing the
use of the IBM Cell B.E.

3 credits. Grade Method: REG/P-F/AUD Course meets in
ENG 005A. Prerequisites: CMSC 345 and CMSC 313 or
permission of instructor.

[7735/7736] 0101 TuTh 7:00pm- 8:15pm

Measuring programming language popularity

December 4th, 2008, by Tim Finin, posted in Programming

What programming language skills are most in demand? Which languages are hot and which ones are in decline? Is COBOL on the endangered language list? Such questions are of interest to all of us in the IT field and maybe especially to students preparing for careers.

The TIOBE Programming Community Index tracks the popularity of 150 programming languages, from ABC to XSLT, based on the number of hits for a simple query (“<language> programming”) run against five web search engines. The top ten in their November 2008 index are, in order: Java, C, C++, Basic, PHP, Python, C#, Delphi, Perl and JavaScript.
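The rating itself is essentially a normalized hit count; a toy version of that computation, with made-up numbers rather than real search-engine hits, looks like:

```python
# Toy TIOBE-style rating: normalize per-language hit counts for the
# query '<language> programming' into percentage shares
# (made-up numbers, not real search-engine data).
hits = {'Java': 205_000, 'C': 152_000, 'Python': 44_000}
total = sum(hits.values())
rating = {lang: 100.0 * n / total for lang, n in hits.items()}
print(rating)  # percentages summing to 100
```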

They also provide trend data since 2001 for the top twenty languages (e.g., Logo) and a composite overview of the top ten. Finally, they provide some aggregate information by paradigm and typing discipline, as well as some analysis and observations.

“There are a number of interesting changes this month. First of all Perl is at an all-time low, whereas Delphi is still on the rise. Delphi is competing for TIOBE’s “Language of the Year 2008 Award” together with C++ and Python. Another interesting trend concerns visual programming languages. These languages are becoming really popular. Most of them have an educational nature for new programmers. Logo, certainly the oldest visual programming language, enters the top 20 this month. The new StarLogo TNG implementation from MIT is probably one of the major causes of this success. Alice, developed by Carnegie Mellon, is new at position 34, whereas Lego Mindstorms’ programming language NXT-G is at position 37. In the tables below some long term trends are listed about categories of languages. The object-oriented paradigm is at an all time high with 57.9%. The popularity of dynamically typed languages seems to be stabilizing (see trend diagram below).”

This is a good resource, although their methodology measures only some aspects of language popularity and seems to include variations due to changes in the underlying search engines on which they rely. In the past, when I have taught our undergraduate programming languages course, I estimated the demand for language-specific programming skills by running a set of queries against an online job site. For students, knowing the current demand for skills is obviously of special interest.
