ITxpo: Cisco and IBM

Charles Giancarlo and Steve Mills are on a panel with two Gartner officers. Giancarlo defines “complexity” as anything that causes customers headaches during implementation. Customers want simplicity, by which they mean solutions thoroughly tested for their environment, not fewer features. He points out that we simplify the mundane (networking protocols) and then build greater complexity atop the newly simplified stack. He also notes, correctly, that the desire for differentiation is a big source of complexity.

Mills says that complexity in software reflects the desire for more autonomy from central IT control, and stems in part from the decentralization of IT infrastructure that began with the shift to minicomputers in the 1970s. He also says that software will continue to get more complex: “software developers, given enough resources and time, will reinvent the work of everyone who’s come before them.” He suggests that encouraging reuse and the adoption of open source can help simplify the stack by discouraging the development of new code, and that bloat is primarily a cultural issue among software programmers. Giancarlo agrees: if an improvement in software code doesn’t directly drive customer benefit, it isn’t worth doing.

Mills says that defeating developers’ cultural tendency to reinvent the wheel requires three things: time-boxing, or defining short-term incremental projects; starving the team for resources; and embracing user-centered design.

(My battery is dying; more to come.)

Later: In discussing how to improve reliability in the face of proliferating software versions, Mills talked about reducing redundancy in code and reproducing known configurations and feature paths in the test environment. Giancarlo talked about lessons from integrating Linksys’s consumer business about striking a balance between complexity and functionality.

Causes: infrastructure is cumulative (earlier today someone said applications never die; the consensus appears to be that retiring outdated IT offerings is almost impossible). One thing that might help, following Drucker, is to keep pushing good ideas down into silicon so that they move lower in the stack. Unfortunately, says Giancarlo, complexity grows organically: today’s skunkworks project is tomorrow’s killer competitive advantage, and so adds to the complexity. The wrong move is to try to remove complexity from new products; rather, look at products when they provide comparatively less value and remove complexity then. Also, the multiplicity of architectures can be a political issue.

Why reduce complexity through SOA? Mills: money is lost in the cracks between all the handoffs among different legacy systems. You need to add some IT complexity, in the form of end-to-end monitoring, to let the business reclaim the money lost in its patchwork of pre-SOA systems. Governance is one of the most important tools for reducing complexity.
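To make “money lost in the cracks” concrete, here is a toy sketch, entirely my own construction rather than anything Mills described, of what end-to-end monitoring across handoffs might look like: reconcile transaction IDs across each system’s log and flag the ones that entered a system but never reached the next. The system names and log format are invented.

```python
# Hypothetical sketch: find transactions that entered one system but never
# reached the next one in the chain. System names and log format are invented.

ORDER_FLOW = ["order_entry", "billing", "fulfillment"]  # assumed handoff chain

# Each system's log: the set of transaction IDs it successfully processed.
logs = {
    "order_entry": {"tx1", "tx2", "tx3", "tx4"},
    "billing":     {"tx1", "tx2", "tx4"},
    "fulfillment": {"tx1", "tx4"},
}

def lost_in_the_cracks(flow, logs):
    """Return (upstream, downstream, missing_ids) for every handoff gap."""
    gaps = []
    for upstream, downstream in zip(flow, flow[1:]):
        missing = logs[upstream] - logs[downstream]
        if missing:
            gaps.append((upstream, downstream, sorted(missing)))
    return gaps

for upstream, downstream, missing in lost_in_the_cracks(ORDER_FLOW, logs):
    print(f"{len(missing)} transaction(s) lost between {upstream} and {downstream}: {missing}")
```

Run against the toy logs above, this flags tx3 as dropped between order entry and billing, and tx2 between billing and fulfillment: exactly the cracks where revenue leaks in a pre-SOA patchwork.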

ITxpo side note: blogging

feedster search results

One thing I find interesting is the small number of bloggers in attendance at the conference. I’m currently sitting next to the only other blogger on the conference blogroll, Boris Pevzner of Centrata (and of MIT Course 6 in the mid-90s). Feedster and Technorati don’t turn up many hits for ITxpo; in fact, Feedster notes that this site is the biggest contributor and helpfully offers to scope the search relative to this site, which is a little scary. Likewise, most of the hits in Technorati are follow-ups to press releases or announcements made at the symposium. Is there so little of value happening at ITxpo that no one has thought to blog it before, or is it just that the sorts of organizations that have embraced blogging are under-represented among the attendees?

Certainly the logistics have a way to go before the conference becomes truly blog-friendly, with none of the Socratic dialogue (or good WiFi) that characterizes the best of the unconferences I’ve attended. But the presence of the conference blog is a nice first step.

ITxpo: Real Time Enterprises

Ken McGee is speaking about real time enterprises. He claims that in a real time enterprise, where IT moves beyond its traditional boundaries to add real value through real time monitoring and modeling of events in the company, business uncertainty becomes “unnecessary and avoidable.” The talk is an expansion of Ken’s book, Heads Up. The idea is to monitor, capture, and analyze root causes and overt events and use them to make near real time decisions.

One way to leverage the benefits of real time decision making, McGee suggests, is to publish financial information more frequently, which not only mitigates compliance risk (à la Sarbanes-Oxley) but also has the side benefit of attracting investors eager for more frequently updated information about the performance of their portfolios. Another is to investigate dynamic pricing. Both are predicated on taking known IT capabilities to the next level; in the case of dynamic pricing, these include supply chain management, sales, electronic ink (for retail price display), wireless, and future demand prediction.

McGee also offers decision criteria for what should be monitored in real time: choose only information that, upon receipt, would make a decision maker change her course of action (this excludes most “dashboard” information), and where such a decision would have a positive effect on the top ten revenue-generating (or cost-driving) business processes. Applied together, these criteria yield a very small number of real time factors.
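A rough way to formalize those two filters (my formalization, not McGee’s) is below; the signals and process names are hypothetical, but the punchline matches his: very few candidates survive both tests.

```python
# Toy sketch of McGee's two filters (my reading, not his): monitor a signal
# in real time only if (1) receiving it would change a decision maker's
# course of action and (2) it touches a top-ten revenue/cost process.

TOP_TEN_PROCESSES = {"order_to_cash", "procurement", "claims"}  # illustrative

candidates = [
    # (signal, would_change_a_decision, business_process)
    ("dashboard_page_views", False, "order_to_cash"),  # typical dashboard noise
    ("credit_hold_spike",    True,  "order_to_cash"),
    ("supplier_outage",      True,  "procurement"),
    ("cafeteria_traffic",    True,  "facilities"),     # actionable, but low stakes
]

real_time_signals = [
    signal
    for signal, changes_decision, process in candidates
    if changes_decision and process in TOP_TEN_PROCESSES
]

print(real_time_signals)  # -> ['credit_hold_spike', 'supplier_outage']
```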

But he also says that IT people are going to be the most likely to block real time data gathering efforts. He doesn’t dive into this deeply enough, in my opinion. This is the area that might really yield some insight into the dynamic between decision making and IT.

ITxpo: Service desk best practices and methodologies

This morning’s first session for me, Excellence in IT Service and Support, was a review of best practices in service desk management. A few interesting data points came out in the context of the talk, including informal survey results (of Gartner Data Center conference attendees) indicating that about 62% of respondents intend to adopt ITIL, either alone or with other methodologies; the number for ITIL alone was 31%.

This was interesting to me because the further one gets in ITIL from the service desk and the incident and problem management processes, the more the standard’s coverage starts to overlap with other well-defined process libraries. For instance, ITIL processes that deal with internally developed applications, including release management and change management, overlap naturally with the Capability Maturity Model Integration (CMMI), which has software development as its focus. Organizations with internally developed applications need to consider not only which portions of ITIL, but also which portions of other methodologies, they may need to adopt as they carry out process improvements.

ITxpo: IBM announcement press

Some follow-up links on IBM/Tivoli’s ITSM announcements from yesterday.

CNET: IBM’s Tivoli tackles IT processes: “The majority of application failures are due to changes that get introduced to a working system, said Bob Madey, vice president of strategy and business development for Tivoli.”

ComputerWorld: IBM unveils Tivoli systems management software. “The idea of having a centralized database for tracking IT assets in an organization arises from long-standing recommendations by the Information Technology Infrastructure Library (ITIL) and other systems management groups. BMC has already announced a centralized database, but IBM believes a federated approach makes more sense because many companies have infrastructure databases already, Madey said.”
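To make the centralized-versus-federated distinction concrete, here’s a toy sketch of a federated lookup layer that leaves each existing asset database authoritative for its own domain and queries it in place, rather than copying everything into one central store. All the class and source names are invented for illustration, not drawn from IBM’s design.

```python
# Toy sketch of a federated configuration lookup: each existing asset
# database stays authoritative for its own domain, and a thin layer
# queries them in place. All names are invented for illustration.

class AssetSource:
    """Wraps one existing infrastructure database."""
    def __init__(self, name, records):
        self.name = name
        self._records = records  # {asset_id: attributes}

    def lookup(self, asset_id):
        return self._records.get(asset_id)

class FederatedCMDB:
    """Fans a query out to every registered source and merges the answers."""
    def __init__(self, sources):
        self.sources = sources

    def lookup(self, asset_id):
        merged = {}
        for source in self.sources:
            record = source.lookup(asset_id)
            if record:
                merged[source.name] = record  # keep per-source provenance
        return merged

cmdb = FederatedCMDB([
    AssetSource("network_inventory", {"srv42": {"switch_port": "sw3/12"}}),
    AssetSource("server_inventory",  {"srv42": {"os": "AIX 5.3", "owner": "ops"}}),
])
print(cmdb.lookup("srv42"))
```

The trade-off Madey gestures at is visible even here: the federated layer avoids a migration, but every query now depends on each source being reachable and agreeing on asset identifiers.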

The MarketWire release talks about the expansion of IBM’s Open Process Automation Library, aka the Orchestration and Provisioning Automation Library (OPAL), to IT service management, a point I failed to capture yesterday; I need to learn more about it before I fully understand the implications.

Follow up: iChat issues in Tiger

Yesterday’s update to Tiger does not address the iChat issues that many newly upgraded users are having, and this new support article, “iChat AV 3.0: ‘Insufficient bandwidth’ messages,” indicates why. The article suggests that iChat’s newly added QoS features are (irony alert) breaking the iChat experience for many users, since the DSCP (Differentiated Services Code Point) marking used to implement the feature is blocked by some ISPs.
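For background, DSCP is a six-bit field in the IP header that asks the network to prioritize certain traffic, and a provider that filters or drops packets carrying unfamiliar markings would break a stream in exactly this way. Here’s a minimal sketch, not Apple’s code, of how an application might set a DSCP value on a UDP socket; the EF value is a standard marking for latency-sensitive media, and everything else is illustrative.

```python
import socket

# Minimal sketch (not Apple's implementation): mark a UDP socket's outgoing
# packets with DSCP EF (Expedited Forwarding, decimal 46), a standard marking
# for latency-sensitive media. DSCP occupies the top six bits of the old
# IP TOS byte, hence the shift by two.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# From here on, packets sent through this socket carry the EF marking,
# which helps only if every network on the path honors (or at least
# tolerates) it; a provider that blocks marked packets breaks the stream.
sock.sendto(b"hello", ("192.0.2.1", 5060))  # 192.0.2.1: documentation address
```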

Question: what do you call a change to an application that breaks existing functionality for many users? Where I work, we call that a regression bug, not a feature.