Scripting Manila and iTunes

New scripts today. First, a version of the iTunes script I wrote a few days ago that posts the currently playing item directly to a Manila website as a news item. Second, some modules that contain functions for making SOAP calls and calling Manila RPC interfaces.

As a programmer, I was big into reuse of code through object orientation. It bugged me for a long time that I couldn’t figure out how to make that work in AppleScript. Today I’ve got one version working. It’s not very clean, because it requires a lot of drag and drop installation, but it’s getting there. The other good thing is that it will cut down on the amount of pain in writing and deploying these scripts because it separates a lot of the Manila “glue” code from the parts of the scripts that actually do things.
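The shape of that separation can be sketched in a few lines. This is an illustrative sketch in Python rather than AppleScript, and the Manila RPC method name and endpoint are hypothetical, not the real interface; the point is just that the "glue" lives in one reusable module while the script that does the work stays tiny:

```python
import xmlrpc.client

# Hypothetical "glue" module: everything Manila-specific lives here, so the
# scripts that actually do things never touch the wire format directly.
class ManilaSite:
    def __init__(self, endpoint, username, password):
        self.proxy = xmlrpc.client.ServerProxy(endpoint)
        self.username = username
        self.password = password

    def post_news_item(self, text):
        # Method name and argument order are invented for illustration --
        # consult the Manila RPC documentation for the real calls.
        return self.proxy.manila.newsItem.create(
            self.username, self.password, text)

# A script like iTunes2Manila then reduces to something like:
# site = ManilaSite("http://example.com/RPC2", "user", "secret")
# site.post_news_item('Currently playing: "Becuz" by Sonic Youth')
```

With that split, a new script only has to supply the interesting part (what to post), not the plumbing.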

All the scripts can be downloaded from my scripts page.

One note about iTunes2Manila–if your site is hosted on like mine is, you may get some timeout messages. I’m still playing with avoiding these, but (as you can tell from my home page), just because you get a timeout doesn’t mean that the news item didn’t get posted.

Scripting iTunes

A new script today. No Manila, but it’s a productivity enhancement for my blogging. It grabs the currently playing song from iTunes and drops it into TextEdit. The script is called iTunes2TextEdit, and it’s available from my homepage.

You need iTunes 2 and Mac OS X to run this script. It also works pretty well with TextEdit2Blog, my other script for blogging from TextEdit.

I want to enhance the script to dump in links with the song information automatically, but other than linking to a Google search, I’m not sure where to point the lookups. The Ultimate Band List and IUMA are both hard to navigate from a search standpoint. Why is there no IMDB for information about music?

Currently playing song: “Becuz” by Sonic Youth on Washing Machine.

Speaking in Tongues and other stuff

Update 12:15 PM: I’m a little behind in pointing to this, but I was ahead in saying it was a bad idea. When I visited Intel in January 2001, a few of us asked why Intel was in the business of making consumer MP3 players. The answer we got? “Well, we’re a really large supplier of memory chips, and this is a critical application for them.” Unsurprisingly, Intel has now announced that it will phase out this product line. No “I told you so’s” from me. 🙂

Trying to be productive this morning. It’s hard. I picked up the Episode 1 DVD last night and I want nothing more than to go home and fall asleep watching it.

Some random links: Dave is the recipient of the top Wired Rave Award, the Tech Renegade Award, for his work on SOAP. I won’t argue–in terms of my blog’s hit count alone, Dave’s certainly been the most influential person around. Plus I’m working on a major project with MIT Sloan’s Center for E-Business around the web services industry that SOAP helped to start.

The white powder that was found in an envelope by an MIT lecturer in Foreign Languages and Literature tested negative for anthrax.

If language is a virus, is it contagious?

The Tin Man has a good comments thread running from Wednesday’s post about journalism. Most of the comments are about his use of the word “y’all.”

Aside: I’ve been gathering unusual words and expressions from the North Carolina side of my family. I never thought much about the colorful language that they used until my undergrad years. Then I read in the excellent liner notes to the Robert Johnson boxed set that Johnson’s term friend-boy in “Cross Road Blues” was a typical Mississippi Delta expression meaning simply friend. “Gee,” I thought, “my uncle says that all the time.” I came to realize that my family’s language placed them solidly in the unique linguistic history of the South.

Some other words and phrases:

(pron. “peert”) for “pretty”
It was so good, my tongue like to beat my brains out. (said about food)
He’s a good businessman. If you shake hands with him, you better
  • count your fingers.
  • put your money in your mouth and sew your tongue up tight.
Pottymule [v. intransitive] – to do nothing constructive. Generally used as “to pottymule around.” See also “blogging.”

The dot-com that broke my heart

After all the things I’ve written about using wireless access in public places, I was really sad to see that Mobilestar is in danger of closing.

I was talking this morning to some classmates, working on a project that was trying to identify customers’ perceptions of Zipcar. One of the perceptions was essentially “It sounds like a great service, but I’m skeptical.” In trying to articulate what was behind this perception, I said, “The customer doesn’t want his heart broken by another dot-com.” I hadn’t thought of it this way before, but that’s a lot of what I’m seeing in business school now. Lots of cynicism, lots of verbal defensiveness. Wall Street is the same way–there’s no rational reason that Akamai, a great company with a great business plan and great prospects, is trading at $4.26 a share. We loved dot-coms and now so many of them are gone. It’s like a grieving process.

More thoughts shortly on the history of web services. I just need to find a better place to write.

History of Web Services, Part II

This is part two of my informal, inaccurate History of Web Services; part one was posted on Wednesday.

CORBA was probably the first credible enterprise scale mechanism to allow processes (applications) on different computers to talk to each other. Why hasn’t it taken over the world?

Characteristics of CORBA

CORBA, according to its caretakers the Object Management Group, was designed to provide “interoperability between applications on different machines in heterogeneous distributed environments and seamlessly [interconnect] multiple object systems.” The calling system did not have to be aware of the hosted service’s location, operating system, programming language, or anything else about the service except for the interface (in the sense of an API–the defined way to talk to the service).

Sounds great in theory. You could theoretically make just about anything a CORBA-compliant object–legacy databases, Perl scripts, Microsoft Visual Basic applications–and have it all work together.

So what’s wrong in practice? The CORBA spec requires an Object Request Broker (the ORB in CORBA) to reside at a known location that keeps track of where all the services are. And that’s potentially a huge performance bottleneck. CORBA isn’t dead yet, but if you were to write its eulogy, performance would be the cause of death.

Microsoft’s first answer: DCOM

Microsoft in the mid-nineties announced that it had a better answer to the distributed computing problem than CORBA. It was based on its existing Component Object Model (COM) that allowed interapplication communication inside a Windows machine, but it now had features in the plumbing to allow COM calls to work across a network. DCOM (Distributed COM) was supposed to make calling remote application services as easy as making OLE calls in Windows.

No, seriously, that’s what they said. The sad reality, at least from the point of view of this ex-Windows programmer of enterprise applications, was that making OLE calls in Windows was never particularly easy–or reliable. Different computers might, or might not, have functional OLE subsystems. The application being called might respond so slowly that the call would time out–even if both were on the same machine. You can only imagine how well distributing that same architecture worked.

Any distributed code you want, as long as it’s Java

The Java community made their own play at a distributed computing paradigm, called Enterprise Java Beans. At least one critic has called EJB the only “detailed, practical” specification of CORBA services. In addition to the core CORBA capabilities, EJB offers security, persistence, concurrency, load sharing, and other “value added” features.

From a programmer’s perspective, EJB sounds like (and is) a great technology. The issues with EJB are more subtle. First, you have to use Java to use EJB. This isn’t necessarily a problem, as Java is a pretty universally available programming language, but it places more constraints on your application than CORBA per se. Second, your application has to be pretty tightly coupled to the particular beans that provide its services. This makes operation in intermittently connected environments dicey at best.

Web Services

So what does the Web Service paradigm offer that CORBA, DCOM, and EJB couldn’t? That, it would appear, is the $64,000,000 question. The new wrinkles are XML, HTTP, and independence from language, platform, and broker.

No Broker: Web services can (and in most cases, do) run without a central broker. This eliminates one perk of CORBA, not having to know where the service is, but I’d argue that’s a questionable benefit compared to the performance hit imposed by going through a broker for all calls.

XML: Web services generally presuppose that the fundamental language of interchange for data is XML. This assumption doesn’t extend across any of the other approaches, where XML support, if it exists at all, exists as a bolt-on.

HTTP: Web services generally use HTTP as the transport protocol. This is important because it enables making web services calls through firewalls. However, as HTTP isn’t a robust transaction protocol, this may impose performance hits.

Platform and language independence: Unlike EJB, you don’t have to write Java to get in the game for web services (at least, most implementations). Unlike DCOM, there are no platform requirements.
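To make the XML-plus-HTTP point concrete, here’s a sketch of what one of these calls looks like on the wire. Python is used purely for illustration; the host, path, and method name are made up, and a real client would normally let a SOAP toolkit generate the envelope rather than hand-rolling the XML:

```python
import http.client

def build_soap_envelope(method, body_xml):
    # A bare-bones SOAP 1.1 envelope: the payload is plain XML text.
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<SOAP-ENV:Envelope '
        'xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">'
        "<SOAP-ENV:Body>"
        f'<m:{method} xmlns:m="urn:example">{body_xml}</m:{method}>'
        "</SOAP-ENV:Body>"
        "</SOAP-ENV:Envelope>"
    )

def call_service(host, path, method, body_xml):
    # An ordinary HTTP POST -- which is exactly why these calls pass
    # through firewalls that would block CORBA or DCOM traffic.
    envelope = build_soap_envelope(method, body_xml)
    conn = http.client.HTTPConnection(host)
    conn.request("POST", path, envelope,
                 headers={"Content-Type": "text/xml; charset=utf-8",
                          "SOAPAction": f'"urn:example#{method}"'})
    return conn.getresponse().read()
```

Because the whole exchange is text-over-HTTP, any language on any platform that can open a socket and emit XML can play, with no broker in the middle.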

SOAP Family History

I was talking to some folks this weekend about my work experience, and mentioned that my programming experience had been in client server systems. “Client server!” they said–“boy, I haven’t heard that expression in a long time.” Feeling instantly old.

So what’s the connection between client-server and this XML-RPC/SOAP/scripting/web services thing I keep writing about? Kind of an indirect parentage, actually. It’s useful to go back a few generations to get the background on where web services came from, technologically speaking.

History Lesson

A long time ago, computer programs were monolithic, in the technical sense of that term. They were one big chunk of code that could be accessed only one way and only ran on one machine. This was because the machines were usually so big and expensive that there weren’t many machines around, and thus there was no value in networking them.

Somewhere along the way, things changed. Pretty soon, people were interested in getting systems to talk to each other. And software had started getting more structured in the meantime. On the one hand, you had application programming interfaces, or APIs: formally defined ways to access the functionality offered by a program, whether an application or an operating system. On the other hand, you had object orientation–the concept that a given chunk of software should protect the data that it accessed and offer well defined methods to read and change that data.

Why was this important? APIs gave software developers a clearly defined path to add functionality to applications or to write applications for platforms. (By the way, the Mac was the first mainstream personal OS with a documented, rich API for writing applications.) And object orientation meant that you didn’t have to worry about some other chunk of the application randomly changing your data, making it easier for large numbers of people to work together on a software project. But we’re still talking about monolithic applications running on only one machine. The network, if it’s there, is slow and unreliable for most users, or else only connects largely heterogeneous systems inside your own company.

Add Network, Stir Vigorously

Shift gears for a second. It’s the eighties. You have a big transaction clearing system for a bank. You want to set up a network of machines to allow people to withdraw cash from their accounts, at places and times that are convenient for them. But you’ve got a problem. Even if you wire up ATMs in all the states where you have branches, you still haven’t covered the people who travel out of the area where your bank exists. How do you get an information system to allow people to get their money regardless of whose machine they’re using?

What if the software objects could talk to each other over the network? What would need to happen to make that possible? Well, one program would need to know how to talk to another and what to do with what the other said. (Sounds like an API.) And you’d want to make sure that only your bank’s systems actually made changes to the data–other banks could make requests, but not actually directly change the numbers in your customer’s accounts–so you could assure your customers’ security and privacy. (Sounds like object oriented code.)

Expose the API to get to a software object and make it accessible over a network. That’s the story behind CORBA (Common Object Request Broker Architecture). Behind DCOM (Distributed Component Object Model). And, to a very simplified degree, behind web services.
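The bank example can be sketched in a few lines of code. This is an illustrative Python sketch, with invented class and method names, but it shows the two halves of the idea: a well-defined API anyone can call, plus encapsulated data that only the owner may change:

```python
class Account:
    def __init__(self, owner_bank, balance_cents):
        self._owner_bank = owner_bank        # only this bank may change the data
        self._balance_cents = balance_cents  # hidden behind the methods below

    def balance(self):
        # Read access: any bank's ATM may ask.
        return self._balance_cents

    def withdraw(self, caller_bank, amount_cents):
        # Write access: only the owning bank's systems may change the numbers.
        if caller_bank != self._owner_bank:
            raise PermissionError("foreign banks may request, not modify")
        if amount_cents > self._balance_cents:
            raise ValueError("insufficient funds")
        self._balance_cents -= amount_cents
        return self._balance_cents
```

CORBA, DCOM, and web services each take an interface like this one and make it callable from another machine; the encapsulation rules travel with it.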

So if we had CORBA fifteen years ago, why web services now? That’s, as they say, a whole ‘nother story–one I’ll try to write about tomorrow.

It’s a Big Scripting Party

Yesterday’s piece on using AppleScript to update a Manila blog interested a lot of people, thanks largely to Dave’s link to my page. A typical story on my site might accrue 10 hits the first day it’s posted. This one garnered 559. Which leaves open the question: by writing that little script and telling Dave about it, was I just shamelessly whoring for hits? Hopefully a few people downloaded the script and found it worthwhile.

I’d like to issue a public invitation to all the people who are interested in using Apple’s new RPC capabilities with Manila. Let’s have a discussion about which scripts would be the most valuable to write. I’ll kick it off: I think that (a) automatic spellchecking prior to posting and (b) the ability to post image files directly from the Finder to my blog would be great things to have. What do you think?

I’d also like to point out that there are people doing similar things for Blogger. Following my referer links, I just came across this script at Web Entourage. It’s smarter than my script–it uses the selected text from any application. This is cool. I’m learning more about AppleScript (I’ll be the first to admit I’m pretty illiterate in it) and about the other people out there who are doing this stuff.

I like the WebEntourage web page better than the page that shows off my script, since he links clearly to the API and to Blogger. I wonder if there could be some way to pull out the links from a Manila message automatically and format them for display somehow, like what Slashdot does.

In non-scripting news…

Today is the fourth anniversary of my wedding to Lisa. We got a lovely e-card yesterday from our dear friend Larry Mueller. I sang with Larry in college and he read at our wedding.

Getting the email from him, I realized it’s been far too long since I spoke with a few of my friends like Larry. Distance is pretty hard to conquer when you’re a student. I think it’s ironic that as my ability to write for my blog has improved, my letter writing skills have diminished.

A long day today. Lots of coursework. I have to keep reminding myself that corporate finance is worth all the trouble.

Some smiles

Thank God the Onion is back. I’ve missed them the last two weeks. Highly recommended: U.S. Vows to Defeat Whoever It Is We’re At War With: “‘The United States is preparing to strike, directly and decisively, against you, whoever you are, just as soon as we have a rough idea of your identity and a reasonably decent estimate as to where your base is located.’ Added Bush: ‘That is, assuming you have a base.'”

Funky Mouse Jive

Another thing making me smile: my browser. For the last five months, my browser of choice has been Mozilla, the open source descendant of Netscape. There are lots of good reasons to use it: better standards support than Internet Explorer, never any threat of smart tags, open bug reporting, daily improvements. On my Mac OS X laptop it’s much faster than Internet Explorer too. But today it’s giving me two pretty revolutionary user interface functions: a tabbed browser window, allowing me to switch back and forth between multiple browser sessions in the same window, and mouse gesture navigation. Gesture navigation uses easily remembered mouse gestures to perform browser navigation, like drag left to go back in the history, drag up then down to reload the page, mouse up and then right to maximize the window… Less overall mouse movement than going to the toolbar. Pretty darn cool.

Where Everyone (Wants to) Know Your Name

A few months ago, I wrote about single sign-in and why AOL and Microsoft are both trying to be the Internet’s major providers of it. Yesterday, Sun announced they were jumping on the bandwagon with digital identity services. It’s surprising that it took Sun as long as it did to come to the party, given their ambitions as an Internet platform company. Why did they wait so long? What’s so important about single sign-on?

When I was a programmer, I used to hate one thing about debugging my application: Every time I wanted to run it to test my code fixes, I had to type in my user name and password. We couldn’t do anything nifty at that point like tying it to some automated central login — the military still didn’t fully trust NT security, and half our user base was running on Windows 95 or 98, which weren’t designed to be bulletproof when it came to authenticating users.

So I did what any self-respecting developer would do when he got a loud complaint from his user (me): I hacked my local code base so that it automatically supplied my username and password when I ran it. Single sign-in, for sure–I was the only one who could sign in.

There’s definitely a user benefit to only having to log on once to access information, even when you’re talking about logging into your computer and only one other system. But what about the web? Every merchant, chat room, vendor site, newspaper, whatever site in existence wants you to sign in somewhere. I calculated the other day that it takes visits to four web sites to pay our monthly bills on line. One of those sites, our bank, consolidates information from at least ten other billing agents behind its “single sign-in.” By doing that consolidation, our bank has reduced the number of user names and passwords that I have to remember from fourteen to four. Do they have my business for a long time? You bet.

Microsoft has announced part of its business strategy behind the .NET initiative. While it will be working with service providers across the Internet to deliver tons of value to the end customer, it will be the customer, as value recipient, who will pay for the service. This is probably good, since it avoids all the known bad business models (advertising supported services, VC funded free software, etc.) that have caused so many dot-coms to implode. But how does Microsoft convince customers that paying for these services is worth it? I think single sign-in is one of the benefits they’re betting that customers will pay for. And I think Sun just woke up and realized that a business shift is occurring, and they are about to miss it.

Virii and Ecosystem Health

I rambled a little yesterday. It’s like that sometimes before 8 am. One of the points I forgot to make in my comparison of IM wars and single sign-on wars is that sometimes keeping a diverse ecosystem is critical to the ecosystem’s health. In English? Two or more strong players in the market are better than one. Common sense, really, except in the computer field, which has for the last ten years had a strong whiff of winner take all about it.

Economists talk about network externalities driving the explosive growth of Windows and of Microsoft Office. A “network externality” is an effect that makes a particular good or service valuable the more other people are using it–at least that’s the common definition.

But I think most people forget that occasionally network externalities can be negative as well as positive. And that’s how we get things like SirCam and other Outlook virii. The Outlook virii bother me because of what they say about email clients. What network externalities can be gained on a large scale from everyone being on just one email client? Within an organization’s firewall, sure, you get value by adding extra components like calendaring (and don’t get me wrong–for corporate email, this is a killer app). But I’m aware of very few companies who extend access to those apps to people outside the firewall. So why does everyone run the same email client?

The real externality in Internet mail is the mail format itself, good old RFC 822, and the open protocols like SMTP that carry it. They’re open standards, just like SOAP. Any client that can speak them can participate in the discussion. If you believe in the logic of network externality, in the email realm there’s no reason that Outlook or any mail client should have the overwhelming market share that it does.
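That openness is easy to demonstrate: any program that can emit the headers-blank-line-body format is a full citizen of Internet mail. Here’s a minimal sketch using Python’s standard library (which actually implements RFC 822’s modern descendant, RFC 5322, but the shape is the same; the addresses are placeholders):

```python
from email.message import EmailMessage

# Any client that can produce this format can participate in Internet
# mail -- no particular vendor's software required.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "RFC 822 is the real network externality"
msg.set_content("Headers, a blank line, then the body. That's the whole deal.")

# The RFC 822-style text that would travel over SMTP:
wire_format = msg.as_string()
```

The externality attaches to the format, not to any one client, which is exactly why one client’s dominance buys so little.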

I used to feel fairly secure about email virii. I was the only Mac user in the MBA program at MIT’s Sloan School of Management, and we used Eudora as the school’s email application. There were always a few people who persisted in using Outlook in spite of the fact that there wasn’t a calendar server anywhere in the school. They spread their fair share of virii, but I was never infected. My diversity protected me. Unfortunately we’re changing over to Outlook in the fall, largely because of the lack of a unified calendar. I’m grimly looking forward to discussions with my IT-savvy but Outlook-bigot classmates after the fourth or fifth email virus comes around.

More or Less Back to Normal

Eudora Welty has died at age 92…

Things are just about back to normal here. Starting early this Friday, I got a bunch of hits (2000 pageviews and counting, up from a normal baseline of 30-40 for a really good article) to the site from MacInTouch and Scripting News readers, looking at my article on SOAP and XML-RPC in Mac OS X 10.1. I think that I scared most of them off from returning with Friday’s piece, though.

For the record, this site isn’t about the Mac, or travelling, or Seattle. It’s more about me and what I’m going through. So you can expect to see me ramble on about a number of topics at any given time. I do recognize, however, that people reading the site may want a little more structure than that. So I’ve added some links in the Navigation area that pull together stories on topics about which I tend to write more frequently. If you’re interested in this site just for one of those topics, you can now bookmark the topic page of your choice. If you don’t mind reading all my chaos, by all means come back to the home page.

Seattle Update

This Saturday I got really sore. I probably should have learned my lesson after last weekend’s exercise in pain management. I was back in Lake Union again on Saturday, this time kayaking with the other MBA interns at my company. We went a little farther in the kayaks than I did before in the rowboat. If you look at this map (provided by the Moss Bay Rowing and Kayaking Center, who rented us the kayaks), we started at the point marked with the cross and paddled around to a point near the Museum of History and Industry, then back. It was an overcast day, so I was spared the utter blistering sunburn I should have had after three hours on the open water without sunscreen. But I did really pull something in my right arm, so that even today I’m finding it hard to lift anything heavy, move my fingers, or apply a lot of pressure with my hand.

Sunday was a little less painful: I attended Bite of Seattle with a few friends. I was a little apprehensive, but it turned out to be a much better event than similar ones I’ve attended in Washington, DC and other places. The food was much better (although I still got a little sick on something I ate) and the crowds were less crazy. I did get sunburned on Sunday, but not too badly.