Using an AirPort Express with FiOS

As I mentioned yesterday, there were a few unfinished items left after the FiOS installation. I got two of them taken care of this morning, but I was a little disturbed by what I had to do to make things work.

After the installation was complete on Sunday, I connected to the administrative web page of the Actiontec router that Verizon had provided (and which is required with the Verizon TV package). I reconfigured the router to take over the network name (SSID) I had been using on my AirPort Extreme, changed the security to WPA2, and set the passphrase to the one I had been using previously. Our laptops and my iPhone picked up the change, but my AirPort Express units (which provide wireless printer support and AirTunes) didn’t. They’re first-generation AirPort Expresses, which do 802.11g and 802.11b only.

After pulling my hair out for a while this morning, I found a thread on the Apple support message boards suggesting that the original AirPort Express was incompatible with the Actiontec’s version of WPA2. I changed the Verizon router to use regular WPA and told the AirPort Express to use WPA/WPA2 for authentication. After rebooting, I finally got a good connection (green light) with the Express. My second Express didn’t need any reconfiguration–I simply unplugged it, plugged it back in, and it worked.

So there’s that. What’s left is getting my hard drive, with all my music, back on the network. I may have to run an Ethernet drop into the living room over Christmas. Or try one of the tricks for supplanting the Actiontec for wireless.

(It’s more than a little annoying, btw, that I had to use regular WPA instead of WPA2. WPA2 is a much more secure protocol and WPA has been cracked.)

Google Chrome 1.0 (build 154.36)

Well, that was fast. Google Chrome went from new to 1.0 in about 100 days:


But is it ready? And why so soon?


I expected Google to add more features over time, since the browser’s architectural improvements alone didn’t seem to clear the critical-differentiator threshold needed to justify launching a new browser. But that didn’t really happen. In fact, Google seems to be launching Chrome with some rough edges intact. Check out this snippet of the WordPress 2.7 login screen (right). See those black edges around the box? That’s a rendering bug in Chrome’s version of WebKit. (The black corners aren’t there in Safari.)

So: Google is rushing a new browser that they “accidentally” leaked just 100 days ago–a browser with significant speed but demonstrable rendering flaws–into an already crowded market. Why? And why launch two days after previewing Google Native Client, a web technology that seems a far more significant leap forward?

My guess: they’re scared of having their thunder stolen, maybe by Firefox. The new Mozilla JavaScript engine, TraceMonkey, appears to be running neck-and-neck with Google’s V8. And when the major feature of your browser is speed, you don’t want to risk being merely as good as your better-established competitor. So maybe releasing Chrome ahead of Firefox 3.1 (which still has no release date, and at least one more beta to go) was simply a defensive move to make sure they aren’t competitively dead before they launch.

Obscure HTML element of the day: dfn

I’ve had an opportunity to do a little static HTML + CSS work recently, and have had a few educational (and re-educational) moments about the joys of doing basic web development–all the stuff that a good CMS like WordPress hides from you.

Today’s educational moment was a question of footnote treatment. My application had footnotes at the very bottom of its page, with nothing beneath them, and used in-page links to jump to them. But the links came from a part of the text close to the bottom of the page, so the footnote was already visible. As a result, when a user clicked a link to get to the footnotes, nothing happened–the footnote was already on screen, and there was no more page left to scroll.
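The in-page footnote pattern I was using amounts to something like this (the ids and text here are invented for illustration):

```html
<p>A claim that needs a source.<a href="#fn1" id="ref1"><sup>1</sup></a></p>

<!-- much further down, at the very bottom of the page -->
<ol class="footnotes">
  <li id="fn1">The supporting detail. <a href="#ref1">&#8617;</a></li>
</ol>
```

Clicking the superscript asks the browser to scroll the element with the matching id to the top of the viewport–which it can only do if there’s enough page left below it.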

There are ways around this. Daring Fireball leaves a lot of empty space below its footnotes, so the page can always scroll far enough to place the footnote at the top. But the bug got me thinking again about why I was using footnotes at all and how I could change the user experience. What if I moved the footnote text–which was generally some sort of quick definition–into a mouseover? I knew I could do it with acronym, but the text I was footnoting wasn’t an acronym, so that wouldn’t have been semantically correct. Was there a semantic way to mark up the word or phrase being footnoted so that a definition would show on mouseover?

Enter dfn. See what that does? The dfn tag is basically tailor-made for what I wanted to do, and it’s even reasonably well supported. FF3 and IE7 even automatically italicize the term.

I made one more change to my stylesheet to make it really explicit that more information was there for the mouseover, and applied the same rule that I had for abbreviations:

dfn {
   border-bottom: 1px dotted #333;
   cursor: help;
}

With that, the user got a dotted underline on the term, and a help cursor when they moused over.
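In markup terms, that ends up looking something like this (the term and definition text are invented examples; the definition rides in the title attribute, which browsers display as a tooltip on hover):

```html
<p>
  Every entry gets a
  <dfn title="A one-line summary shown in feed readers">teaser</dfn>
  at the top.
</p>
```

Combined with the stylesheet rule above, the term gets the dotted underline and help cursor, and hovering reveals the definition.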

I would probably make one more change if the application were expected to be printed: introduce some styles or JavaScript in the print stylesheet to expand the definition inline. But for what I needed to do, dfn worked pretty well by itself. Yay obscure HTML elements!

Remix culture: NASA’s bootleg Snoopy from 1969

I had read about NASA’s use of Snoopy and the Peanuts characters as unofficial mascots for Apollo 10 (it was well documented in Charlie Brown and Charlie Schulz, which sat on my Pop-Pop’s bookshelf alongside the Peanuts Treasury), but don’t remember seeing this. Courtesy Google Image Search and the LIFE archives:

As good an argument for the Commons as I’ve ever seen. The irony is, of course, that it sits in Google Images with no reasonable licensing in place. Even this bootleg image is claimed as copyright LIFE magazine.

Google LIFE archive: where are the usage rights?

I’m impressed by the new LIFE photo archive at Google Images–it’s a truly significant work of digital content. But it’s missing one important thing: a usage policy. The images are marked (c) Time Inc., so it’s clear they aren’t public domain. But is there any way to purchase usage rights? The only reuse provision seems to be a framed print purchase.

Compare it to what Flickr does with the images in its commons, or anywhere else for that matter–a clear licensing agreement, selectable by the poster, that explains how images can be used. The LIFE archive may be visually striking, but it would be much more valuable if the images could have a life beyond Google’s servers.

Ubiquity memory issues on Firefox

I may have to stop using Ubiquity for a while. I’ve used it exclusively because it, plus the share-on-delicious script, provides a great keyboard-only way to tag web pages for Delicious: simply hit ctrl-space and type “share Delicious bookmark description tagged delicious tags entitled title”.

Alas, there are definite memory issues with Ubiquity or with the script. I currently have three tabs open in Firefox and the memory is more or less stable at 112,988K. If I invoke Ubiquity and start typing:

share This is a sample Delicious post that's not too different from one I would normally do, except a bit shorter and more fictional. tagged ubiquity entitled foo ubiquity test.

then memory usage suddenly spikes to 571,028K! Usage gradually falls back down, but it climbs steeply while I’m typing, and there’s a point beyond which Firefox becomes unusable. Maybe I’m a canary user because I’m a touch typist, typing faster than Firefox can garbage-collect? I still can’t believe that Ubiquity could be consuming so much, though.

(Update: apparently I’m not alone.)

What blogging is (revisited)

I checked out a new people search engine (on a link from Lifehacker) and, of course, searched for myself. I was surprised to see a lot of discussion of an old piece I had written after the first Bloggercon, a two-post thought stream called “What is a blog” and “Blogging and empowerment” that gave first a technical definition of what a blog was, and then a sociological one.

The responses, apparently from a high school class at City Arts and Tech in Digital Design (!–Ted Curran, if you’re out there, drop a comment; I’d love to know how you incorporate blogging into your teaching), were interesting and made me go back and look at what I wrote. Here are a few excerpts:

  • Peter Luc: “A blog can just be about anything you want it to be, from your daily lives to what you feel about something. Anyone can create a blog and start blogging right away… A lot of people use blogs to tell others what is going on in the world like what they see with their own experiences. This can replace the sites that people usually go to to check the daily news….Blogging has to do with relationships when you make it a personal blog. A personal blog to me can be like 2 people blogging about what they do in a day and the 2 people can share their day with each other. It’s kind of like when you pass notes during class to different people, but instead this is web based so you won’t get caught. :)”
  • Rukiyah Sanders: “Due to the increase in technology over the coarse of these past few years we are able to do so much we weren’t able to do back then.”
  • Brandon House: “There are no rules in blogging, one can make up things with their own mind. people have the freedom to express what they must. I believe that freedom of speech is one of the most powerful weapons and tools you can give to an individual with a mind.”
  • Holden Way-Williams: “i guess it shined some light on the mysteries of blogging, but for the most part it was not too helpful. blogging is very simple. you go online, and you write on this thing and everyone around the world can read it… the article was not interesting. the information was not very useful, and the guy who wrote it was pretty boring.”

Well, Holden, you got me. It was pretty boring. I was trying to make a real point, but got tangled up in the mechanics of blogging rather than focusing on the real thing.

Here’s what blogging is: It’s a person writing his thoughts down and sharing them with people online. For person, you could substitute a middle schooler or your grandma, or the CEO of a hospital. For sharing them with people, it could be the writer’s friends, or it could be somebody who’s Googling for something unrelated and comes across it months or years later.

What’s changed in the five years since I wrote the original piece is that you don’t have to have a dedicated website of your own to blog. You can do it on Facebook or Myspace, or in short thoughts on Twitter, or in one of a million other places. The thing some folks don’t like about Facebook is that the wider Internet can’t get the benefit of your thoughts–which is probably OK if you’re blogging to your girlfriend or boyfriend, but might not be if you want people other than your friends to join a discussion with you or learn what you thought about something.

For me, now, blogging is an investment in the future. When I write something in my blog, I’m making a bet that I’ll be interested in going back and using it again later, or that someone else will find it useful. It’s a bet that usually doesn’t pay off; I’d guess no one has read three quarters of the stuff on this site. But sometimes it pays off big–like when a class of high school students thinks seriously about what I wrote about blogging, and I get to learn from what they thought about what I said.

And you get to learn that they take blogging for granted. Which is, in and of itself, pretty cool. When I was in high school, I didn’t have a public forum like blogging. (And I had to walk uphill, both ways.)

Not to slight anyone: here are other responses from Max Bizzarro, Roselle, Sschafra, Nataly, J. Pascual, Mara, Jessica Tang, Tatyana K, Hawkman, SJ, Noel, and Maureen. Hawkman’s response is maybe my favorite: “The fact that someone could have so much faith in a new idea as a means of solving age old problems is kinda funny, because there have been dozens of technologies that would supposedly solve such problems, but the results were never definitive.” Yes, you’re right, but on the other hand blogs were one of the things that helped get Barack Obama elected.

Google and publishers agree to sit down and make some money

New York Times: Google Settles Suit Over Book-Scanning. It’s good to see the book publishing industry come to its senses.

Now that the parties have agreed to share revenue from book sales and library use, it becomes even clearer that Google Books is yet another Internet-mediated disintermediation. Google Books is probably the best-targeted marketing vehicle the book industry has had since the original Amazon, because of its reach, its ease of use, and its ability to make transparent the previously opaque covers of books and help us find useful content. I’ve personally found it more useful than the usual suspects (book reviews, bestseller lists) for finding research works; sometimes you need to read the original book to decide whether it’s useful to you, rather than relying on third-hand opinion.

Here’s to a win for all involved–Google, book publishers, and above all, for you and me.

Test driving Google Reader

One of the downsides of being an early adopter in some areas is that I’m a late adopter in many others. I was using a desktop RSS aggregator back in 2002 (Radio Userland, then NetNewsWire) and so came late to the web-based news aggregator market. When I did hop on board, I used Bloglines, one of the early web based aggregators, and so missed out on Google Reader. I’ve stuck with Bloglines because it works and because it works well on the iPhone.

Yesterday, Bloglines wasn’t working. I haven’t seen anything posted about this, but while the site’s UI was up I didn’t get any new results for any of my 175 feeds from about 11 AM on. So in the early afternoon I decided to give Google Reader a spin.

One of the nice things about feed readers is that it’s pretty easy to take all your feeds to a new reader, thanks to OPML (one of Dave Winer’s many innovations in this area). Most feed readers support exporting your feed list to OPML, a structured XML format, and support importing feed lists from OPML. So you can pack up your feeds and easily bring them to a new place–minimizing vendor lock-in. I did that with my Bloglines feeds and was up and running quickly in Google Reader.
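For reference, an OPML feed list is a short XML file along these lines (the feed names and URLs here are just examples):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.1">
  <head>
    <title>My feed subscriptions</title>
  </head>
  <body>
    <!-- one outline element per subscribed feed -->
    <outline text="Example Blog" type="rss"
             xmlUrl="http://example.com/feed.xml" />
    <outline text="Another Example" type="rss"
             xmlUrl="http://example.org/atom.xml" />
  </body>
</opml>
```

Export produces a file like this from one reader; import walks the outline elements and resubscribes to each xmlUrl in the other.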

One thing that struck me almost immediately was Google Reader’s poorer UI. While it uses the same left-pane-navigation, right-pane-reading metaphor as Bloglines, the left pane is cluttered with a bunch of stuff at the top–starred items, trends, shared items and notes, a big help pane, and THEN your list of feeds. Bloglines’ feed list takes up the whole left pane and is just your content–much easier to manage–while other information, like your personal blog and “clippings,” lives in separate tabs. If you’re just interested in reading feeds, Bloglines’ navigation is easier and less cluttered.

Bloglines’ right-pane UI is a little better too, imho. I find the separate drop-shadowed feed boxes in Google Reader’s expanded view (what NetNewsWire used to call “smash view”) distracting; Bloglines’ zebra-striped list is visually flatter and doesn’t get in the way of the content. And I can’t imagine a use for the list view for most of my RSS feeds; perhaps notification-only feeds are suited to that kind of presentation, but I can’t imagine trying to read BoingBoing or even Krugman that way.

Google Reader does feel a little snappier–feeds update more frequently and load more quickly. But the reading experience is actually slower, because items don’t get marked as read on display, only when you scroll them off the screen. That might benefit some people, but I’m a quick scanner and like to run through the feed list quickly. And because Google Reader doesn’t fetch all the items in a folder at once–it fetches them dynamically as you scroll–there’s no way to jump to the bottom and read everything at once. You have to wait for the fetch to catch up, then scroll to the bottom again.

So this morning I was pleased to see Bloglines back online. I’ll still test out the Google Reader iPhone experience, because there are things that don’t quite work for me in Bloglines’ version. But I’ll keep using Bloglines in my browser.

Ubiquity: it’s big, big, big. For geeks, anyway.

I installed the new Firefox extension Ubiquity yesterday and just got around to going through the Ubiquity 0.1 User Tutorial today. It’s seriously like nothing I’ve ever seen. Well, not exactly true: it’s like putting a Unix command line together with Quicksilver and Greasemonkey and Google and Wikipedia and…

So OK, it’s amazing. The ability to highlight text and type commands like map, wikipedia, and translate isn’t a game changer–Microsoft’s Smart Tags in IE6 (which appear to be making a comeback in IE8) did much the same thing, attaching commands to a dropdown menu on the web page. But the ability to put the results back into the web–to replace an address with a Google map, to translate inline, to affect the DOM of the pages you’re viewing right now–is.

Which makes me wonder: what’s the security model for Ubiquity? You clearly have to opt into downloading a Ubiquity command, but what guarantees do we have that it can’t do something malicious? Like, say, cross-site request forgery?

The other question, of course, is: outside the universe of people who care about things like Quicksilver, will anyone care? It’s probably too early to say, but it’s already made me more productive–every link in the article above was looked up via a search through Ubiquity with no tab switching, no leaving the WordPress popup, nothin’. There are some things that could be done to improve the process–I’d like a command that starts with a Google search, then ends with the URL on my clipboard or even inserting the link right into my WordPress text edit window–but that’s what “teaching Ubiquity new commands” is all about, I guess.

Breaking: Technorati acquires Blogcritics

I was just wondering the other day: what happened to Technorati? Apparently they’ve been reinventing themselves as an advertising and media company. The latest step: the acquisition of Blogcritics, the open cultural criticism site for which I’ve written in the past and may do again in the future. Announcement on the Technorati Blog; coverage at TechCrunch, which estimates the deal size at around $1 million.

Congrats to Eric Olsen and the rest of the Blogcritics crew. Putting the site together was a lot of work and keeping it running has been even more, and it couldn’t have happened to a nicer guy.

Get a jump on Download Day

Courtesy a little bird, it’s possible to download Firefox 3.0 already, though it hasn’t been announced yet.

The latest public download is RC3:

…&lang=en-US

but if you remove rc3 from the URL, you get:

…&lang=en-US

which is a valid URL. (So much for security by obscurity.) Enjoy your early start on Download Day! (Tip o’ the hat to Dil.)

Update: Or not. The version string in the -3.0 version is the same as the one in the RC3 version about box. Oh well.

CSS fixin’: toward a vertical grid

It should, in theory, be possible with CSS to design a page where the type falls on a vertical grid. In reality you rarely see it happen, because multi-column sites make matching grid values across columns difficult, and browsers–particularly IE–insert inconsistent space around some block elements.

But the basic theory is simple enough. Decide the base unit of height of the page, and go through your stylesheet, making sure that everything there is a multiple of the base unit of height. Your tools are line-height, padding-top and padding-bottom. For example, if your base measurement is 20 pixels between lines of type, and you want space between your paragraphs, you might define a style like p { line-height: 20px; padding-bottom: 20px;} or even p { line-height: 20px; padding: 10px 0;} (where the latter splits the padding above and below the paragraph).

Getting it to work right can be a real bitch, though. What if you have a heading that you want to set larger than 20 pixels? The naïve approach (which I just implemented) might be a rule like this: h3 { font-size: 24px; line-height: 28px; padding: 8px 0 4px;} where the 28 pixels of line height are padded with a total of 12 pixels to make up 40, or two lines. But this only works if all your h3s fit on one line; a heading spanning two lines takes up 68 pixels (two 28-pixel lines plus 12 pixels of padding)–a little more than three lines–messing up the grid. The recipe for success is to avoid setting multi-line headings too closely, or perhaps to use smaller font sizes for headings.
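Putting the whole scheme in one place, a baseline-grid stylesheet sketch might look like this (the selectors and values are illustrative, built on the 20-pixel unit above):

```css
/* Base unit: 20px. Every element's vertical footprint
   (line-height plus vertical padding) should be a multiple of 20px. */
body { font-size: 13px; line-height: 20px; }
p    { margin: 0; padding-bottom: 20px; }   /* one blank line after each paragraph */
h3   { font-size: 24px; line-height: 28px;
       padding: 8px 0 4px; }                /* 28 + 8 + 4 = 40 = two lines */
blockquote { margin: 0 0 20px 20px; }       /* indent without breaking the grid */
```

The discipline is simply arithmetic: any time you change a line-height, padding, or margin, check that the totals still sum to a multiple of the base unit.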

Here, as in all things related to web typography, a valuable resource is The Elements of Typographic Style Applied to the Web, the brilliant adaptation of Robert Bringhurst’s essential typography rulebook to CSS+HTML+ (occasionally) JavaScript. In this case, the sections on vertical motion (Choose a basic leading that suits the typeface, text and measure and Add and delete vertical space in measured intervals) are invaluable, and I’m going through this theme’s stylesheet and working on applying the principles now. So if things look odd, don’t worry, it’s not just you.

I should also point to 8 fonts you probably don’t use in CSS, but should as the inspiration for changing my sidebar headers to Gill Sans (though I might pick a different sans in a day or two), and 10 Examples of Beautiful CSS Typography and How They Did It for the specific inspiration to use small, capitalized, letterspaced sans serif for the headings. Both are quite well-written posts from the blog at 3.7 Designs.

Using a Google Maps Gadget in Google Sites

I’m working on a Google Site for a group I belong to, and it’s been an uneven experience. The user experience for general editing is quite good, but things get very hairy very quickly when you try to insert “gadgets” onto the page. I assumed that adding, for example, a Google Map to a page would be trivial, but working with the official Map Gadget was frustrating as there was no help documentation on how to populate the “data source URL” field.

I finally dug deep enough to turn up the answer. The Google Map gadget is intended to work with Google Spreadsheets. To get a map with your own markers onto your Google Site, here’s what you do:

  1. Create a spreadsheet in Google Docs.
  2. In the first column of the spreadsheet, type in the address as you would search for it in Google Maps. Full street addresses seem to work quite reliably.
  3. In the second column, enter some text that describes the item–this becomes the tooltip for the marker on the map. Repeat for as many items as you like, making sure not to skip rows.
  4. For debugging, I like to add a Map Gadget to the spreadsheet to check my data. Tell it to use Sheet!A1:B10 (or however many rows you have) and to use the last column for tooltips, then Apply and Close.
  5. Verify that the map looks right, then click the map so the title bar is showing, click the little menu in the upper right hand corner of the gadget, and choose Get Query Data Source URL. Choose Entire Sheet and select and copy the resulting URL to the clipboard.
  6. In your Google Site, edit the page you want to put the map on.
  7. Insert a map gadget into the page using the Gadget browser. Fill in the fields, pasting the URL you copied in step 5 into the Data Source URL field, and checking the option to use the last column for tooltips.
  8. Save your changes to the page and behold: a map with markers for all your attractions.

This gives you a pretty basic map view with control over the markers, the height and width of the displayed map, and tooltips. For more advanced map control (street view vs. satellite, zoom level, etc.) I think you’d have to embed raw calls to the API, but I’d be thrilled to be proven wrong.
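For reference, the spreadsheet from steps 1–3 would look something like this (the addresses and labels are invented for illustration):

```
Column A (address)                            Column B (tooltip text)
1600 Amphitheatre Parkway, Mountain View, CA  Googleplex
1 Infinite Loop, Cupertino, CA                Apple headquarters
```

Each row becomes one marker: column A is geocoded to place it, and column B supplies the tooltip when you check the “use last column for tooltips” option.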


NewsJunk

In the last few days of the primary season, I’ve become utterly addicted to NewsJunk, Dave Winer’s new aggregator for political news and commentary. I’m not sure how, but the site has managed to maintain a high signal-to-noise ratio while still reaching far beyond the usual news sites. Last night, for example, it turned up the transcript of Obama’s speech… before he gave it.