A lost day due to travel. I’m not bouncing back as fast as I used to. More tomorrow.
ITxpo: Keynote with Eric Schmidt of Google
The keynote interview with Dr. Eric Schmidt, CEO of Google, was as interesting for what he didn’t reveal as what he said. Eric has a bracingly dry sense of humor about his business and the industry, but he is deadly serious about the company and its responsibilities and challenges. He is also skilled at the art of offering insightful answers that do not directly answer the questions he is asked, so be forewarned as you read my notes. (Unless noted, everything below that is not a question is a paraphrase or direct quote of Eric.)
Q: Has the Internet leveled the playing field such that the dominant players in the industry can’t exercise their power? A: You know, “dominant” has a specific legal meaning…
There have been a lot of changes in the industry over the last 10-15 years. Back then, email was something you had to get in the car and drive to the office to do. I’d drop my daughter off, go to the office, do email, finish email and then hop in the car and go home…
(On how Google solves the performance and responsiveness issues that other companies might face online:) The problems are easier for Google because our data tends to be more static and lends itself to being replicated, as opposed to transaction-based data that changes frequently.
(On Google’s internal systems:) When I got to Google we were going to build our own financial system because we were frustrated with the limitations of Intuit’s Quickbooks and its five-user license. Seriously. So I said that isn’t going to work exceptionally well when it comes to Sarbanes Oxley and auditing. We implemented Oracle Financials and put it in a box, and said “we aren’t going to change this.” Then we built a system around it to manage the business’s contact with it, and we change that frequently.
Our focus is on personalization and comprehensiveness. For instance, you can see Google internet search results, your intranet (with the Google search appliance), and your hard drive all in one results list. Of course that brings in the issue of privacy, and that is where the “don’t be evil” corporate culture comes in. …Anyone can pull the ripcord and say “that’s evil” from an end user perspective and stop the train. [And no, Dave, I didn’t get a chance to ask him about autolinking, and he didn’t volunteer an opinion.]
We’re not in the information technology business, we’re in the information business. We have only digitized a very small percentage of all available content. There is a lot of room in the market.
(On advertising:) We don’t run the business as an advertising business. We run it based on end user satisfaction. If we keep our users satisfied, we keep our ad inventory up, which keeps advertisers happy. Q: You’ve taken away ad revenue from magazines and other traditional content players…. A: I prefer to think we’ve grown the market. There is a growing shift to more contextual ads and we are playing in that market. Q: You also seem to be important to a very large base of very small companies…. A: It is scary to understand that you are fundamentally in the revenue chain of a small business. We disseminate that information across the company and use it in planning products.
Q: You hired the lead developer of Firefox. Are you going to build a browser? A: We decided a long time ago that we would pursue a browser independence strategy, so that our services would work well on all browsers. These people, like the one you mentioned, are working on that, and do important work with the open source community as well.
Respecting your customers
Catching up on non-ITxpo related topics this morning, two things caught my eye. First, my delayed reaction to the announcement that the New York Times will be putting some of its content, notably op-ed columns, behind a for-pay wall starting in September. This is of course brilliant because the Times’s editorial opposite number, the Wall Street Journal, has its constellation of right-wing editorial columnists available for free. So now there will be even less of an opposing voice online. What’s most depressing as a user and reader of the Times is that this move comes after a history of reader-and-blogger-friendly decisions, including RSS support. So long, NYT, we’ll miss you. Is there an editorial forum out there that wants to stay on the record, and stay in the conversation? (For straight news, the BBC is looking better all the time.)
Second, the announcement from Microsoft about their new ID infrastructure, InfoCard. On the surface, the announcement sounds a lot like Apple’s Keychain: a local system solution to hold identity information such as login names, passwords, and certificates. The difference is that InfoCard, like its failed Passport predecessor, can also hold credit card information. The shift in Microsoft’s identity management strategy, from central control to the user’s own machine, represents a clear victory for Microsoft’s customers, and may be a pretty good indication that Microsoft is doing a better job of listening than it was four years ago. (More information about InfoCard, including a description of the user experience and some underlying technology notes, courtesy Johannes Ernst.)
Connection? Your customers will be the people who tell you whether your new business plans will succeed or fail. Learning to listen to them is an essential skill that must be mastered if you are to compete.
—Which gets me nice and warmed up for the final session I’ll attend at ITxpo, Are Your Customers and Users Revolting?, where three Gartner analysts will discuss customer collaboration and communication technologies and the implications for enterprises. I’m going to see if I can arrange some sort of connectivity in the room so I can blog the session, but otherwise I’ll take notes and post later.
ITxpo: Cisco and IBM
Charles Giancarlo and Steve Mills are on a panel with two Gartner officers. Giancarlo defines “complexity” as anything that causes customers headaches in implementation. Customers want simplicity, by which they mean that solutions are thoroughly tested for their environment, not fewer features. He points out that we simplify the mundane (networking protocols) and then build greater complexity atop the newly simplified stack. He also correctly points out that the desire for differentiation is a big source of complexity.
Mills says that complexity in software is a reflection of the desire for more autonomy from central IT control, and stems in some respects from the decentralization of IT infrastructure that began with the shift to minicomputers in the 1970s. He also says that software will continue to get more complex: “software developers, given enough resources and time, will reinvent the work of everyone who’s come before them.” He suggests that encouraging reuse and adoption of open source can help to simplify the stack by discouraging the development of new code, and that bloat is primarily a cultural issue among software programmers. Giancarlo agrees: if an improvement in software code doesn’t directly drive customer benefit, it isn’t worth doing.
Mills says that defeating the developer’s cultural tendency toward re-creating the wheel requires: time-boxing, or defining short-term incremental projects; starving the team for resources; and embracing user-centered design.
(My battery is dying; more to come.)
Later: In discussing how to improve reliability in the face of proliferating software versions, Mills talked about reducing redundancy in code and reproducing known configurations and feature paths in the test environment. Giancarlo talked about lessons learned from integrating Linksys’s consumer business in creating a balance between complexity and functionality.
Causes: infrastructure is cumulative (earlier today someone said applications never die; the consensus appears to be that retiring outdated IT offerings is almost impossible). One thing that might help is, following Drucker, to continue to push good ideas down into silicon so that they move lower in the stack. Unfortunately, says Giancarlo, complexity grows organically: today’s skunkworks project is tomorrow’s killer competitive advantage, so it adds to the complexity. The wrong thing to do is to try to remove complexity from new products; rather, look at them when they provide comparatively less value and try to remove complexity then. Also, the multiplicity of architectures can be a political issue.
Why to reduce complexity through SOA: Mills: money is lost in the cracks between all the handoffs between different legacy systems. You need to add some IT complexity in end-to-end monitoring to enable the business to reclaim the money lost in its patchwork of pre-SOA systems. Governance is one of the most important solutions for reducing complexity.
ITxpo side note: blogging
One thing I find interesting is the small number of bloggers in attendance at the conference. I’m currently sitting next to the only other blogger on the conference blogroll, Boris Pevzner of Centrata (and of MIT Course 6 in the mid-90s). Feedster and Technorati don’t turn up many hits for ITxpo; in fact, Feedster notes that this site is the biggest contributor and helpfully offers to scope the search relative to this site, which is a little scary. Likewise, most of the hits in Technorati are follow-ups to press releases or announcements made at the symposium. Is there so little of value happening at ITxpo that no one has thought to blog it before, or is it just that the sort of organizations that have embraced blogging are under-represented among the attendees?
Certainly the logistics have a way to go before the conference becomes truly blog-friendly, with none of the socratic dialog (or good WiFi) that characterize the best of the unconferences I’ve attended. But the presence of the conference blog is a nice first step.
ITxpo: Real Time Enterprises
Ken McGee is speaking about real time enterprises. He claims that with the real time enterprise, in which IT moves beyond its traditional boundaries to add real value through real time monitoring and modeling of events in the company, business uncertainty becomes “unnecessary and avoidable.” The talk is an expansion on Ken’s book, Heads Up. The idea is to monitor, capture, and analyze root causes and overt events and use them to make near real time decisions.
One way to leverage the benefits of real time decision making, McGee suggests, is to publish financial information more frequently, which not only mitigates compliance risks (a la Sarbanes Oxley) but has the side benefit of attracting investors eager for fresher information about the performance of their portfolios. Another is to investigate dynamic pricing. Both are predicated on taking known IT capabilities to the next level. In the case of dynamic pricing, this includes supply chain management, sales, electronic ink (for retail price display), wireless, and future demand prediction.
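To make the dynamic pricing idea concrete, here is a minimal sketch of the sort of rule such a system might apply. This is my own illustration, not anything McGee presented, and every name and number in it is made up:

```python
# Illustrative only: a toy dynamic pricing rule that nudges a price
# toward demand, bounded by a floor and a ceiling. The inputs (forecast
# demand, current inventory) are assumed to come from the kinds of
# real time feeds McGee describes.

def dynamic_price(base_price: float, forecast_demand: float,
                  inventory: float, floor: float, ceiling: float) -> float:
    """Scale price by the ratio of forecast demand to available inventory."""
    if inventory <= 0:
        return ceiling  # nothing left to sell; post the maximum price
    pressure = forecast_demand / inventory  # >1 means demand outstrips supply
    price = base_price * (0.9 + 0.2 * min(pressure, 2.0))  # damped adjustment
    return max(floor, min(ceiling, price))

# Example: demand running at twice inventory pushes the price up 30%.
print(dynamic_price(base_price=10.0, forecast_demand=200,
                    inventory=100, floor=8.0, ceiling=15.0))  # -> 13.0
```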
McGee also discusses decision criteria for what should be monitored in real time: only choose the information that, upon receiving it, would make a decision maker change her course of action (this does not include most “dashboard” information), and where such a decision would have a positive effect on the top ten revenue-generating (or cost) business processes. This turns out to yield a very small number of real time factors.
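McGee’s two criteria boil down to a simple filter. A toy sketch of how one might encode them, with hypothetical metric and process names of my own invention:

```python
# Hypothetical encoding of McGee's filter: keep only metrics that would
# change a decision AND touch a top-ten revenue or cost process.
from dataclasses import dataclass

@dataclass
class CandidateMetric:
    name: str
    changes_a_decision: bool  # would a decision maker act differently?
    business_process: str     # process the metric reports on

TOP_TEN_PROCESSES = {"order-to-cash", "procure-to-pay"}  # stand-ins

def worth_monitoring_in_real_time(m: CandidateMetric) -> bool:
    return m.changes_a_decision and m.business_process in TOP_TEN_PROCESSES

metrics = [
    CandidateMetric("server CPU (dashboard)", False, "order-to-cash"),
    CandidateMetric("order backlog spike", True, "order-to-cash"),
]
print([m.name for m in metrics if worth_monitoring_in_real_time(m)])
# -> ['order backlog spike'] : a very small number, as McGee predicts
```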
But he also says that IT people are going to be the most likely to block real time data gathering efforts. He doesn’t dive into this deeply enough, in my opinion. This is the area that might really yield some insight into the dynamic between decision making and IT.
ITxpo: Service desk best practices and methodologies
This morning’s first session for me, Excellence in IT Service and Support, was a review of best practices in service desk management. A few interesting data points came out in the context of the talk, including informal survey results (of Gartner Data Center conference attendees) indicating that about 62% of respondents intend to adopt ITIL, either alone or with other methodologies; the number for ITIL alone was 31%.
This was interesting to me because the further one moves in ITIL from the service desk and from incident and problem management, the more the standard’s coverage overlaps with other well-defined process libraries. For instance, the ITIL processes that deal with internally developed applications, including release management and change management, overlap naturally with the Capability Maturity Model Integration (CMMI), which has software development as its focus. Organizations with internally developed applications need to consider not only which portions of ITIL, but also which portions of other methodologies, they may need to adopt as they carry out process improvements.
ITxpo: IBM announcement press
Some follow-up links on IBM/Tivoli’s ITSM announcements yesterday.
CNET: IBM’s Tivoli tackles IT processes: “The majority of application failures are due to changes that get introduced to a working system, said Bob Madey, vice president of strategy and business development for Tivoli.”
ComputerWorld: IBM unveils Tivoli systems management software. “The idea of having a centralized database for tracking IT assets in an organization arises from long-standing recommendations by the Information Technology Infrastructure Library (ITIL) and other systems management groups. BMC has already announced a centralized database, but IBM believes a federated approach makes more sense because many companies have infrastructure databases already, Madey said.”
The MarketWire release talks about the expansion of IBM’s Open Process Automation Library, aka Orchestration and Provisioning Automation Library (OPAL), to IT service management, a point I failed to capture yesterday because I need to know more about it before I fully understand the implications.
Follow-up: iChat issues in Tiger
Yesterday’s update to Tiger does not address the iChat issues that many newly upgraded users are having, and this new support article, “iChat AV 3.0: ‘Insufficient bandwidth’ messages,” indicates why. The article suggests that iChat’s newly added QoS features are (irony alert) breaking the iChat experience for many users, since the DSCP (Differentiated Services Code Point) marking used to implement the feature is blocked by some ISPs.
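For the curious: the DSCP is a six-bit marking carried in the high bits of the IP header’s old TOS byte, and an application sets it per socket. Here is a generic sketch of the mechanism (not Apple’s code, just an illustration); if an ISP’s routers filter or strip packets with unexpected markings, traffic sent this way suffers exactly as the article describes:

```python
# Generic sketch of DSCP marking on a UDP socket (not Apple's code).
# DSCP occupies the high six bits of the IP TOS byte, so the value is
# shifted left two bits. EF (Expedited Forwarding) = 46, per RFC 3246,
# is commonly used for real-time audio/video.
import socket

DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
# From here on, every datagram this socket sends carries the EF marking;
# a router or ISP that filters on the TOS byte can deprioritize or drop it.
sock.sendto(b"audio frame", ("192.0.2.1", 5060))  # documentation address
```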
Question: what do you call a change to an application that breaks existing functionality for many users? Where I work, we call that a regression bug, not a feature.
ITxpo Monday wrap-up: blogger meeting call
Well, that’s it for my first day at ITxpo. I’m off to Toronado.
Hey, a quick thought to other bloggers at the conference: while Gartner is moderating a networking breakfast on blogging on Thursday, I’ll be heading out on Wednesday night. If you’re interested in having an impromptu blogging meet-up on Tuesday or Wednesday—in or out of the scope of the conference—ping me.
Afternoon impressions: CMDB conquers all
After the posting flurry of the morning, my battery (laptop and biological—damned jet lag) ran out, so I caught a nap before heading back to the Moscone for the afternoon sessions and the opening of the ITxpo floor. The sessions that I attended after the IBM presentation were complementary, so I’m going to wrap my notes up in one post.
The session on ITSM and IT Operations was a good overview, discussing the IT Management Process Maturity Model (an IT-centric take on the CMM), with an emphasis on building the service portfolio and treating it as a marketing document. The session on Configuration Management focused on tools that map dependencies between assets (hardware, networking, software and software components); this sort of approach is required if you are to provide effective support for an end-to-end service in the enterprise.
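The dependency mapping those tools do is essentially a graph problem: walk the depends-on edges from a business service to find everything it rests on. A toy sketch of my own, with invented asset names (a real CMDB tool discovers these edges automatically):

```python
# Toy dependency map: which assets does a business service ultimately
# depend on? The asset names here are purely illustrative.

DEPENDS_ON = {
    "online-banking": ["web-tier", "auth-service"],
    "web-tier":       ["app-server-01", "load-balancer"],
    "auth-service":   ["ldap-01"],
    "app-server-01":  ["db-cluster"],
}

def support_set(service: str) -> set[str]:
    """All assets reachable from a service through depends-on edges."""
    seen: set[str] = set()
    stack = [service]
    while stack:
        node = stack.pop()
        for dep in DEPENDS_ON.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(support_set("online-banking")))
# Impact analysis runs the same walk in reverse: which services break
# if db-cluster goes down?
```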
Both sessions had a fairly grim assessment of the move toward service-oriented IT service management. The ITSM session stated that through 2008, 65% of organizations will be focused on non-service-focused metrics like the availability of individual servers, rather than end-to-end availability, and that it would take about five years for application development to start mapping service dependencies as part of the development process. Both sessions agreed that, at least for the next year to 18 months, configuration management was going to be a largely manual activity.
It seems, with the proliferation of tools that provide configuration management for some piece of the IT puzzle, that the opportunity is for someone to provide a general standards-based interface that spans multiple CMDBs, creates connections, and integrates key ITIL processes, particularly change management. But the cost could be dear: Ray Paquet estimates that, depending on the scope of your configuration tracking, you could have as many as 100,000 assets in a moderate-sized organization, and the number of relationships that could be tracked scales exponentially. The challenge is to provide intelligent oversight across these databases while allowing drill-through where required, so that no one solution has to hold all the data.
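To illustrate what I mean by a spanning interface, here is an entirely hypothetical sketch of a federation facade: each domain CMDB keeps its own data, and the facade routes lookups and reports which database owns the record, so you can drill through. No vendor ships this interface; the names are mine:

```python
# Hypothetical federation facade: each domain CMDB holds its own slice
# of the data; the facade delegates queries rather than copying
# everything into one store.

class DomainCMDB:
    def __init__(self, name: str, items: dict[str, dict]):
        self.name = name
        self.items = items  # asset id -> attributes

    def lookup(self, asset_id: str):
        return self.items.get(asset_id)

class FederatedCMDB:
    def __init__(self, sources: list[DomainCMDB]):
        self.sources = sources

    def lookup(self, asset_id: str):
        """Return the record plus which CMDB owns it, for drill-through."""
        for src in self.sources:
            record = src.lookup(asset_id)
            if record is not None:
                return {"owner": src.name, **record}
        return None

network = DomainCMDB("network-cmdb", {"rtr-7": {"type": "router"}})
servers = DomainCMDB("server-cmdb", {"db-cluster": {"type": "database"}})
print(FederatedCMDB([network, servers]).lookup("db-cluster"))
# -> {'owner': 'server-cmdb', 'type': 'database'}
```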
IBM: Making ITIL actionable
IBM unveils a stack of IT Service Management products, including new products and fourteen enhanced products. Key is an open federated configuration management database. Positioning as helping provide tools and practices to make ITIL more implementable. More coming…
IBM says that they’re uniquely positioned to bridge development and infrastructure, between Rational and Tivoli. This positions them to help make IT services more manageable.
Organizations have choices: implement lots of point solutions to fix specific problems, adding complexity; outsource, possibly losing a competitive strength; or apply ITSM. Customer testimonial from Patty Medhurst at Royal Bank of Canada. Complex environment, automated testing, etc. First step to ITIL is documenting the service catalog, which is a “gruesome but necessary” process; next step is automating key services.
Steve McMillan, leader of IBM Integrated Technology Services: moving from resource management to systems management to services management. In services management, need models: business reference model, process reference model for IT, and implementation reference model for IT. Business reference model: uses Component Business Model from PwC; move those into IT Process Reference Models and then into a Unified Process Model. (Aside: So far this is about architecture and modeling, not management…)
Customer testimonial 2: Don Woodward, American Express. He talks about using simulation tools (WebSphere Integrator) to build a simulation of the end to end infrastructure and to model the effects of process improvement on cycle time.
Now, focus on change management. Existing products fit in different steps. Now the new IBM Change and Configuration Management Database (CCMDB) maps changes in relationships and business data, plus policy and workflow actions. IBM IT Service Management: new platform in IBM Tivoli CCMDB. IT Process Managers: workflow capability, customize workflow, enable application of IBM/Tivoli technology to each step of process. Plus of course services.
IBM Tivoli Unified Process: process reference model for IT, aligned with ITIL. “Customers don’t have the sophistication or capability to map their processes, so here’s this online tool to help you do that…” And OPAL: expanding to auto-discovery. The process modeler is graphical, providing detailed workflow and activities for each step.
Other points: enhancing Tivoli Configuration Manager and Provisioning Manager with patch management and autodiscovery; new Federated Identity Manager, enabling extension of Tivoli Identity Manager out across the supply chain and to customers.
(Incidentally, here’s the press release—took quite a few clicks to find it.)
The CCMDB: Integrates supporting products and higher level processes, includes a process workflow and modeling engine, supports policy administration, provides support for configuration items and relationships, autodiscovery, reconciliation of discovered data, data federation. Works with Peregrine and Remedy.
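Of that list, reconciliation of discovered data is the piece that makes or breaks a CMDB in practice. A toy sketch of what it means (my own illustration, not IBM’s algorithm): compare what autodiscovery found against what the database records, and flag the differences for review.

```python
# Toy reconciliation: compare what autodiscovery found against what the
# CMDB records, and flag drift. Real products add matching rules,
# precedence by data source, and workflow to approve the changes.

recorded   = {"app-server-01": {"os": "AIX 5.2", "memory_gb": 8}}
discovered = {"app-server-01": {"os": "AIX 5.3", "memory_gb": 8},
              "app-server-02": {"os": "AIX 5.3", "memory_gb": 4}}

def reconcile(recorded: dict, discovered: dict):
    for ci, attrs in discovered.items():
        if ci not in recorded:
            yield ("unrecorded CI", ci, attrs)  # discovery found something new
            continue
        for key, value in attrs.items():
            if recorded[ci].get(key) != value:
                yield ("drift", ci, key, recorded[ci].get(key), value)

for finding in reconcile(recorded, discovered):
    print(finding)
# -> ('drift', 'app-server-01', 'os', 'AIX 5.2', 'AIX 5.3')
#    ('unrecorded CI', 'app-server-02', {'os': 'AIX 5.3', 'memory_gb': 4})
```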
Focus for Tivoli Process Solutions: Release Management, Availability Management, Integration Life-Cycle Management.
ITxpo: Web Services lead presentation
Web Services in the Enterprise (Frank Kenney): Not about having services but about controlling them. Theme emerging from conference so far: important thing is to ensure that what is provisioned is supported. Show customers (partners, end users) that management layer is in place. Kenney discusses ESBs, APS and middleware, as well as vendor strategy, in considering the management of web services. (Wonder what Radovan Janecek at Systinet would say about the ESB point.)
I’m going to diverge from Frank’s talk for a second and bring it back into an IT Services Management framework. ITIL would say that there are several processes that connect to a hypothetical web service: service level management, configuration management, and availability management are key. Unfortunately a lot of ITIL implementations begin with a focus on incident/inquiry management and (if you’re lucky) problem and change management. This is good, but it’s still a reactive situation. If you’re managing proactively, you’re monitoring your services so that you can actively gauge when you’re meeting your service levels, not reacting when someone tells you you’re out of compliance, at which point it’s too late.
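Proactive monitoring here means probing the service yourself and comparing the result to the service level target before anyone complains. A minimal sketch, where the URL, thresholds, and polling interval are all stand-ins I invented:

```python
# Minimal proactive SLA probe (illustrative; every value is a stand-in).
# Poll the service, time the response, and warn while there is still
# headroom, instead of waiting for a user to report a breach.
import time
import urllib.request

SERVICE_URL = "http://example.com/api/quote"  # hypothetical web service
SLA_SECONDS = 2.0                             # contractual response time
WARN_AT     = 0.8 * SLA_SECONDS               # alert with headroom to act

def probe() -> float:
    start = time.monotonic()
    urllib.request.urlopen(SERVICE_URL, timeout=SLA_SECONDS).read()
    return time.monotonic() - start

while True:
    try:
        latency = probe()
        if latency > WARN_AT:
            print(f"warning: {latency:.2f}s, approaching the {SLA_SECONDS}s SLA")
    except Exception as exc:                  # a timeout counts as a breach
        print(f"breach: probe failed ({exc})")
    time.sleep(60)                            # one sample per minute
```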
The issue is that there are tradeoffs in how you monitor the services. Frank covers that, but also has some interesting insights about the market as a whole: considering 12 major vendors in the market, there’s less than $50 million in 2004 revenue across all of them and fewer than 100 production customers. Also consider that many of the firms are on their second or third rounds of funding, and consider their exit strategy — likely acquisitions — before you buy.
Implementing ITIL
Implementing ITIL (Shafqat Azim): suggestions include defining service catalog up front, defining dependencies (messaging systems, reporting systems); manage communications about benefit of process up to management and out to users; have a clear taxonomy and way of describing the scope of your initiative; have a clear vision of where you’re going; look at a reference model with enough depth to know the challenges and dependencies that you will be facing. Consider the level of maturity of processes on which you are dependent. Focus your effort on a handful of processes to refine, but consider how to mitigate the weaknesses of the other processes.
Validate process through use cases of particular issues—implementing a server, responding to change, etc.
Once you have the organizational requirements and use cases, think about tools. The process comes first; tools come second. (Is this ever not true?) Use the use cases to develop a matrix of functionality for automation, and have a bake-off.
This is all fine, but: I wonder where the trade-off is between process paralysis and avoiding useless tool expenditures.
ITxpo Keynote: managing complexity
The opening Gartner keynote starts off a tad condescending (who told Gartner’s CEO to cite “portal software” as an example of complex new IT challenges?), but quickly gets more interesting and starts delivering some insights, including a quote attributed to Ezra Pound (“Man is an overcomplicated organism. If he is doomed to extinction it will come from a want of simplicity”—source?) and a citation of the Law of Requisite Variety. Complexity is a bell curve; the claim is that there is an inflection point beyond which making a system more complex diminishes its value rather than increasing it. They suggest a criterion for the review and acceptance of technology: “positive return on complexity.”
They make a very strong case that one way to mitigate complexity is through process. Sounds good to me—that’s what ITIL is all about. Also useful: consider where to place complexity (away from the users). Don’t make IT’s life easier at the expense of the end user experience.
There’s a tradeoff between business needs and managing complexity. Also deep relationship between complexity and change management. Framework for understanding n-order change (this is when people start leaving): understand interaction between cultural systems (org structure, people) and technology systems (technology and tasks). First order change: tasks affected. Second order change: tasks and people. Third order change: affects every single variable, e.g. ERP. Fourth order: affects partners, customers, and other external connections.
Here comes the sale: there is a Gartner decision framework that helps you think across the strategic issues in managing complex new projects.
(Aside: wonder if there is a point to be made that blogging got adopted and is a profoundly transformative technology precisely because it’s less complex than other knowledge management tools?)
Managing complexity checklist:
- Standardization: can reduce complexity or increase capacity to manage more complexity. One consideration: number of vendors and products, but be careful of lock-in.
- Automation: from the traditional (adding additional tools to replace human labor) to augmentation (e.g. expert systems). Can hide complexity, but frequently not from IT; also raises the bar for IT staffing and pay.
- Best practices:
- Don’t do everything at once.
- Buy what you need, not what you might need.
Remember process! This is the connection between the organization and technical complexity.