Wednesday, December 29, 2004

Alan Kay on the State of Computing Today


Interview with Alan Kay
Stuart Feldman, ACM Queue 2:9:20-30, Dec 2004/Jan 2005

Alan is well known as the inventor of Smalltalk, not to mention leading his lab at Xerox PARC, which invented the mouse, the graphical user interface, Ethernet, the laser printer, and virtually all the stuff that forms the basis of modern personal computing. In addition to winning the Turing Award, Kay recently received the Draper Prize from the National Academy of Engineering and the Kyoto Prize in Advanced Technology.

Alan Kay: For a Scientific American article 20 years ago, I came up with a facetious sunspot theory, just noting that there's a major language or two every 10 1/2 years, and in between those periods are what you might call hybrid languages...

Perhaps it was commercialization in the 1980s that killed off the next expected new thing. Our plan and our hope was that the next generation of kids would come along and do something better than Smalltalk around 1984 or so... One could actually argue--as I sometimes do--that the success of commercial personal computing and operating systems has actually led to a considerable retrogression in many, many respects.

You could think of it as putting a low-pass filter on some of the good ideas from the '60s and '70s, as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare...

So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

Tuesday, December 28, 2004

Are Web Services Distributed Objects?


David Orchard wrote an interesting note comparing web services to distributed objects on his blog. Check out the detail on his site or at webservices.org. His summary is:

I've shown that some pretty important technical facets - extensibility, versioning, state, verb re-usability, synchronicity - are areas where Web services aren't that different from distributed objects. There is an issue that the bulk of Web services can't take advantage of Web infrastructure. Sure, Web services use XML with Namespaces, and that buys a lot for interoperability. The knowledge of the network is an important differentiator between Web services and distributed objects.

The challenge for anybody to prove that Web services = or != distributed objects is to quantify the differences or similarities in actual architecture terms - like identity, state, lifecycle, verbs, synch/asynch, message exchange patterns. Show how Web services are more or less brittle than distributed object technology at a technical level. Not just "Oh, Web services are SOA and distributed objects are objects and we all know services are better than objects." That's yucky thinking.

Web services are pretty close to distributed objects at a technical level, but Web services != distributed objects at a political level because we roughly have all the big vendors working together. It would be nice if the distributed object folks wanted to try some new approaches (hey, URIs!) but we'll get Web services to work technically and politically because the technical differences are the important ones (remote knowledge) and the politics are better.
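
To make the comparison concrete, here is a minimal Python sketch (all class names, namespaces, and values are hypothetical, not from Orchard's post) contrasting the two styles: a distributed-object stub that binds to an object's identity and invokes typed verbs on a proxy, versus a Web service that sends a self-describing XML document to a URI-addressed endpoint.

```python
# A minimal, self-contained sketch (hypothetical names throughout) contrasting the
# two styles Orchard compares: a distributed-object stub that binds to a remote
# object's identity and invokes typed verbs, versus a Web service that exchanges
# self-describing XML documents with an endpoint identified by a URI.

import xml.etree.ElementTree as ET

# --- Distributed-object style: identity + typed verbs on a proxy ---------------
class OrderServiceStub:
    """Stands in for a CORBA/DCOM-style proxy; the caller is coupled to the
    object's interface and lifecycle, not just to a message format."""
    def __init__(self, object_reference: str):
        self.object_reference = object_reference  # opaque identity, not a URI

    def get_order_status(self, order_id: int) -> str:
        # In a real ORB this would marshal arguments and dispatch to the object.
        return f"status-of-{order_id}"

# --- Web-service style: a document sent to a URI-addressed endpoint ------------
def build_soap_request(order_id: int) -> bytes:
    """Builds a SOAP-like envelope; the knowledge of the network (URIs,
    namespaces) is explicit in the message rather than hidden in a stub."""
    ns = "http://example.org/orders"  # hypothetical namespace
    envelope = ET.Element("Envelope")
    body = ET.SubElement(envelope, "Body")
    request = ET.SubElement(body, f"{{{ns}}}GetOrderStatus")
    ET.SubElement(request, f"{{{ns}}}OrderId").text = str(order_id)
    return ET.tostring(envelope)

if __name__ == "__main__":
    stub = OrderServiceStub("IOR:000000...")  # object identity
    print(stub.get_order_status(42))          # verb invoked on a proxy
    print(build_soap_request(42).decode())    # document sent to an endpoint
```

The stub hides the network behind an interface; the document makes the knowledge of the network explicit in the message, which is roughly the differentiator Orchard points to.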

Wednesday, December 22, 2004

Search Wars: Google Options



What’s Next for Google
By Charles H. Ferguson, MIT Technology Review, January 2005

For Eric Schmidt, Google’s CEO, 2004 was a very good year. His firm led the search industry, the fastest-growing major sector in technology; it went public, raising $1.67 billion; its stock price soared; and its revenues more than doubled, to $3 billion. But as the search market ripens into something worthy of Microsoft’s attention, those familiar with the software business have been wondering whether Google, apparently triumphant, is in fact headed off the cliff.

I’ve seen it happen before. In September 1995, I had breakfast with Jim Barksdale, then CEO of Netscape Communications, at Il Fornaio in Palo Alto, CA, a restaurant popular with Silicon Valley dealmakers. Netscape had gone public a few months earlier, and Netscape Navigator dominated the browser market. Vermeer Technologies, the company that Randy Forgaard and I had founded 18 months earlier, had just announced the release of FrontPage, a Windows application that let people develop their own websites. Netscape and Microsoft were both preparing to develop competing products. Our choice was to stay independent and die or sell the company to one of them.

At breakfast, and repeatedly over the following months, I tried to persuade Barksdale to take Microsoft seriously. I argued that if it was to survive, Netscape needed to imitate Microsoft’s strategy: the creation and control of proprietary industry standards. Serenely, Barksdale explained that Netscape actually invited Microsoft to imitate its products, because they would never catch up. The Internet, he said, rewarded openness and nonproprietary standards. When I heard that, I realized that despite my reservations about the monopolist in Redmond, WA, I had little choice. Four months later, I sold my company to Microsoft for $130 million in Microsoft stock. Four years later, Netscape was effectively dead, while Microsoft’s stock had quadrupled.

Google now faces choices as fundamental as those Netscape faced in 1995.

Tuesday, December 14, 2004

Scrum: Subsumption Architecture and Emergent Behavior


PicBot exhibiting emergent behavior

Yesterday's posting on the birth of Scrum generated some questions on Rodney Brooks' subsumption architecture. One could argue that Agile processes let architectures emerge by building the simplest possible thing and evolving toward more complex behavior through close connection of developers to code, pre-organized patterns of behavior, simple refactoring techniques, no central control, no shared representation, and short daily meetings with face-to-face communication.

This sounds remarkably similar to the University of Michigan AI Lab Cliff Notes on Rodney Brooks:

Brooks reasons that the Artificial Intelligence community need not attempt to build "human level" intelligence into machines directly from scratch. Citing evolution as an example, he claims that we can first create simpler intelligences, and gradually build on the lessons learned from these to work our way up to more complex behaviors.

Brooks' Subsumption architecture was designed to provide all the functionality displayed by lower level life forms, namely insects. Using a common house fly as an example, Brooks claims that creatures at this level of intelligence have attributes such as close connection of sensors to actuators, pre-wired patterns of behavior, simple navigation techniques, and are "almost characterizable as deterministic machines". The Subsumption architecture provides these capabilities through the use of a combination of simple machines with no central control, no shared representation, slow switching rates and low bandwidth communication.
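
As a rough illustration, here is a minimal Python sketch of that description (the behaviors and sensor readings are hypothetical): simple layered behaviors with no central control and no shared world model, where a higher layer can subsume the output of the layers beneath it.

```python
# A minimal sketch (hypothetical behaviors and sensor values) of the subsumption
# idea described above: simple layered behaviors with no central planner and no
# shared world model; each layer reads the sensors directly, and a higher layer
# can subsume (override) the output of the layers beneath it.

from typing import Optional

def avoid_obstacle(sensors: dict) -> Optional[str]:
    """Lowest layer: reflexively turn away when something is close."""
    if sensors["range_cm"] < 20:
        return "turn-left"
    return None

def wander(sensors: dict) -> Optional[str]:
    """Middle layer: keep moving when nothing more important is happening."""
    return "forward"

def seek_light(sensors: dict) -> Optional[str]:
    """Higher layer: head toward bright light when one is detected."""
    if sensors["light"] > 0.8:
        return "turn-toward-light"
    return None

# Higher layers are listed first; the first non-None output wins, which plays
# the role of the suppression wiring in Brooks' real architecture.
LAYERS = [seek_light, avoid_obstacle, wander]

def step(sensors: dict) -> str:
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command
    return "stop"

if __name__ == "__main__":
    print(step({"range_cm": 100, "light": 0.2}))  # -> forward
    print(step({"range_cm": 10,  "light": 0.2}))  # -> turn-left
    print(step({"range_cm": 100, "light": 0.9}))  # -> turn-toward-light
```

There is no planner and no stored model of the world anywhere in the loop; the "intelligence" is just the layering of simple, sensor-driven machines.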

Monday, December 13, 2004

Nativity Scene: How Scrum was Born


IROBOT's Genghis Khan now in the Smithsonian

Recently, the Cutter Consortium asked me to write an article on the birth of Scrum. The first Scrum incubated at Easel Corporation in 1993 and was influenced by the birth of IROBOT's first robot, Genghis Khan.

Sutherland, Jeff (2004) Agile Development: Lessons Learned from the First Scrum.

Of historical interest is that I joined Easel Corporation in 1993 as VP of Object Technology after spending 4 years as President of Object Databases, a startup surrounded by the MIT campus in a building that housed some of the first successful AI companies.

My mind was steeped in artificial intelligence, neural networks, and artificial life. If you read most of the resources on Luis Rocha's page on Evolutionary Systems and Artificial Life, you can generate the same mindset.

I leased some of my space to a robotics professor at MIT, Rodney Brooks, for a company now known as IROBOT Corporation. Brooks was promoting his subsumption architecture, where a bunch of independent dumb things were harnessed together so that feedback interactions made them smart, and sensors allowed them to use reality as an external database rather than having an internal datastore.

Prof. Brooks viewed the old AI model of trying to create an internal model of reality and computing off that simulation as a failed AI strategy that had never worked and would never work. You cannot make a plan of reality because there are too many datapoints, too many interactions, and too many unforeseen side effects. This is most obviously true when you launch an autonomous robot into an unknown environment.
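
Here is a tiny Python sketch (with a hypothetical one-variable world) of the contrast Brooks was drawing: planning against an internal snapshot that can silently go stale, versus treating reality itself as the database and sensing it on every step.

```python
# A small sketch (hypothetical world and sensors) of the contrast drawn above:
# the classical approach plans against an internal model built once, so its plan
# silently goes stale when the world changes; the Brooks-style approach treats
# reality as the database and re-reads the sensors on every step.

world = {"door_open": True}          # the real environment, which can change

# --- Classical AI style: plan against a snapshot of the world ------------------
def plan_with_internal_model() -> str:
    model = dict(world)              # internal copy made up front
    world["door_open"] = False       # reality changes after the snapshot...
    if model["door_open"]:           # ...but the plan still trusts the copy
        return "drive-through-door"  # the plan is now wrong
    return "wait"

# --- Reactive style: no stored model, sense reality every step -----------------
def react_to_reality() -> str:
    if world["door_open"]:           # read the actual world right now
        return "drive-through-door"
    return "wait"

if __name__ == "__main__":
    print(plan_with_internal_model())  # -> drive-through-door (stale plan)
    print(react_to_reality())          # -> wait (the door is actually closed)
```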

The woman I believe will one day be known as the primeval robot mother by future intelligent robots was also working in my offices, giving these robots what looked like emotional behavior. Conflicting lower-level goals were harnessed to produce higher goal-seeking behavior. The robots were running in and around my desk during my daily work. I asked IROBOT to bring Genghis Khan to an adult education course that I was running with my wife (the minister of a local Unitarian Church), where they laid the robot on the floor with eight or more dangling legs flopping loosely. Each leg segment had a microprocessor, and there were multiple processors on its spine and so forth. They inserted a blank neural network chip into a side panel and turned it on.

The robot began flailing like an infant, then started wobbling and rolling upright, staggered until it could move forward, and then walked drunkenly across the room like a toddler. It was so humanlike that it evoked the "Oh, isn't it cute!" response in all the women in the room. We had just watched the first robot learn how to walk.

That demo forever changed the way the people in that room thought about robots, people, and life even though most of them knew little about software or computers.

This concept of a harness that coordinates independent processors via feedback loops, with the feedback grounded in real data coming from the environment, is central to human groups achieving higher-level behavior than any individual can achieve on their own. Maximizing communication of essential information between group members actually powers up this higher-level behavior.

Around the same time, a seminal paper was published out of the Santa Fe Institute mathematically demonstrating that evolution proceeds most quickly as a system is made flexible to the edge of chaos. This demonstrated that confusion and struggle were essential to emerging peak performance (of people or software architectures, both of which are journeys through an evolutionary design space).

On this fertile ground, the Takeuchi and Nonaka paper in the Harvard Business Review provided a name, a metaphor, and a proof point for product development; the Coplien paper on the Borland Quattro Pro project kicked the team into daily meetings; and daily meetings combined with time boxing and reality-based input (real software that works) started the process working. The team kicked into a hyperproductive state (only after daily meetings started), and Scrum was born.