December 06, 2006

Digital Music, Piracy, and IT… oh my…

I’m back on the digital music player front again.  I haven’t had the honor of playing with one yet, but I’m hearing that Microsoft has slipped on the installation front with the new Zune.  It’s not easy. Which is too bad, because the beauty of the Apple product is that even your pet can install an iPod, download music and go.  Which leads me to another of my little hobby horses around digital music players.

The entertainment industry is in a tizzy over changing business models.  People are moving to downloads – legal and illegal – and away from the original models of purchasing entire disks on the portable format du jour that had limited copying capability.  

Here’s a big secret – what customers REALLY want – they want easy access to the content at a reasonable price.  It’s that simple.  

Make your software and purchasing of music easy and cheap (the Apple model) and customers will flock to it. Make things complicated, difficult, etc. and customers will get annoyed, which seems to be a specialty of many technology vendors.

When we speak about IT’s customers, we’re usually referring to corporate customers (outside the consumer space), and we spend a lot of time talking about how their business models are changing.  And we discuss how good customer experiences are critical to their business’ overall success.  Many businesses want to use technology to improve the customer experience on the notion that happy customers are loyal, repeat customers who will spend more with the company they like.

Compare that to the entertainment industry, where instead of looking for new models, we see an antagonistic approach toward customers, with companies claiming they need to defend their old way of doing things.  And surprise surprise, sales are dropping, and consumers are constantly coming up with workarounds for whatever the industry comes up with to protect intellectual property.

Please note here, because I get asked this question all the time – I DO NOT CONDONE PIRACY OF INTELLECTUAL PROPERTY.  Analysts also have intellectual property, and we appreciate NOT having it stolen, so I do appreciate why companies try to protect it.  My complaint is that the argument is frequently positioned as “we must maintain our old business model because all customers are pirates/condone piracy”.  How twisted is that?

It seems sometimes that IT vendors are aiding and abetting bad business models rather than using innovation to create newer, smarter models.  Industry people I’ve spoken to have actually challenged me, arguing that most people will steal because it’s easy to do and most people will not do the right thing.  I disagree, but the fundamental nature of man has been argued by philosophers for generations, and this blog is not going to pick up that line now; still, it is an assumption that needs to be examined.

I believe the problem a lot of people have is that they know perfectly well that when they pay for content they like, most of the money is not going to the artists.  In fact, they’d rather remove the middleman and give more money directly to artists.  The middlemen have traditionally supplied marketing and distribution, but the Internet is making it easier for artists to do both themselves without relying on the middlemen.  That’s not to say that record companies have no value, but their value is changing, shifting; and instead of exploring that shift, they’re basically digging in their heels and refusing to budge while hiding behind the notion of protecting artists’ intellectual property.

I’ve got a couple of Sony CDs that I purchased in Germany of a German artist I like.  However, I now play all my music electronically, and those CDs have copy protection software which – according to the label – will damage my computer/CD drive if I insert it in the computer.  I don’t listen to those CDs anymore, which means I don’t listen to that artist anymore.  Those CDs have no more value for me. I am being punished, and I did nothing wrong other than to want to play those CDs on something other than a traditional CD player.

In the end, IT vendors can decide to defend an aging business model and declare that all customers are really pirates and that the record companies are virtuous; or they can decide that customers come in all types, some honest, some dishonest, that it is bad business to assume a dishonest customer, and that they can work with customers and record companies to find new models that better benefit artists, middlemen, and consumers.

November 16, 2006

Is Sun Rediscovering its Creative Roots?

I recently attended a dinner party in San Francisco, courtesy of Sun Microsystems, that featured guests from eBay, PG&E, and of course Sun. Despite the abysmal traffic that all attendees faced in getting downtown on a rainy night, eventually 25 or so folks enjoyed dinner together. Although one could easily have assumed that the topic du jour would have been Sun’s or eBay’s latest product offering, the theme of the evening was energy conservation. PG&E was touting some of its latest energy conservation initiatives aimed at the data center. Although the California utility is well known for its programs to prod customers into retiring aging and inefficient household appliances and lighting, its efforts targeting IT infrastructure efficiency are less well known. Sun discussed the various PG&E rebates that apply to new energy-thrifty Sun servers, and eBay talked about how it is becoming more energy efficient in its datacenter.

In other matters new and different, Sun recently announced Sun Startup Essentials, a program designed to enable early-stage companies to deploy Sun technology at price points commensurate with startup budgets. The announcement followed Sun's Startup Camp, which gathered more than 400 entrepreneurs to share their insights and experiences about turning ideas into viable businesses. Sun products eligible for discounts include its energy-efficient Sun Fire x64 and Sun Fire T1000 and T2000 servers, as well as x64 and UltraSPARC-based workstations. According to Sun, the UltraSPARC-based Sun Fire T1000 and T2000 servers deliver 5x the performance of competitive servers using 1/5 the power and 1/4 the space, and are the only systems to qualify for a power rebate from PG&E.
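
Taken at face value, those vendor claims compound in interesting ways. Here is a back-of-the-envelope sketch in Python; the baseline server's figures are invented purely for illustration, and only the 5x, 1/5, and 1/4 ratios come from Sun's claim:

    # Hypothetical baseline server; only the ratios below come from Sun's claim.
    baseline = {"perf": 1.0, "watts": 400.0, "rack_units": 4.0}

    t2000 = {
        "perf": baseline["perf"] * 5.0,              # "5x the performance"
        "watts": baseline["watts"] / 5.0,            # "1/5 the power"
        "rack_units": baseline["rack_units"] / 4.0,  # "1/4 the space"
    }

    # Relative performance per watt and per rack unit versus the baseline.
    perf_per_watt = (t2000["perf"] / t2000["watts"]) / (baseline["perf"] / baseline["watts"])
    perf_per_unit = (t2000["perf"] / t2000["rack_units"]) / (baseline["perf"] / baseline["rack_units"])

    print("performance per watt: %.0fx baseline" % perf_per_watt)       # 25x
    print("performance per rack unit: %.0fx baseline" % perf_per_unit)  # 20x

In other words, if the marketing numbers hold, the claim is really a 25x improvement in performance per watt, which is exactly the sort of metric a utility rebate program would care about.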

The company also announced it is releasing its implementations of Java as free software under the GNU General Public License version two (GPLv2). Included in this release are the first pieces of source code (the Java HotSpot JVM, the javac compiler, and JavaHelp) for Sun's implementation of Java Platform Standard Edition (Java SE), as well as a buildable implementation of Java Platform Micro Edition (Java ME). Sun is also adding the GPLv2 license to Java Platform Enterprise Edition (Java EE), which has been available for over a year under the Common Development and Distribution License (CDDL) through Project GlassFish. In addition to CDDL, Project GlassFish will also be available under GPLv2 in the first quarter of 2007.

OK, so Sun has been busy making announcements. In and of itself, this is no big deal, but when one stops to think about what is contained in the announcements, there is something much bigger to be gleaned. All of these activities represent the kind of game-changing marketing that was historically associated with Sun. More than just saying, “Ours is better than yours,” these announcements construct new opportunities through efforts to create new markets, as opposed to simply playing king of the hill in tried and true marketplaces. By emphasizing a new approach to financing startups, Sun is quite possibly bringing new customers into its fold that would otherwise likely have gone for the white box, or highest-discount-available, approach. With its emphasis on energy efficiency and its partnership with PG&E, Sun is looking to change the rules by which products will be assessed, and has brought in some creative outside financial incentives to help change the playing field as well. By releasing Java under the GPL, the company has finally abandoned its outmoded view that key software should remain under Copernican lock and key, and has instead chosen to join the open community process, with, hopefully, further deployment of Java as a truly priceless technology.

This kind of creative behavior, as also noted with Project Blackbox, is a welcome return by Sun to its own startup mentality: snarky, dynamic, and historically smart in its competitive playbook. Technology is cool and fun if you are a geek, but for the rest of the market, it is a tool, or worse, a mystery. For those seeking tools to make their businesses operationally competitive, however, technology is an afterthought; function is the paramount concern. Getting the message out to potential customers is a long-term endeavor, but one that pays substantial dividends for those who have a differentiating message and take the time to articulate it.

While it was a rainy evening outside the dinner party, inside we were treated to an illuminating look at how some traditional, and not so traditional, IT players were seeking to create new opportunity and bolster competitive advantage. It is this very kind of discussion and general business-focused thinking that the marketplace is looking to vendors to provide, and happily, at least some are beginning to step up to the plate. Of course, my trip in and out of downtown would have been more efficient, and actually less costly, had I only taken the subway rather than driving; I would have enjoyed greater energy efficiency and saved time and money. Perhaps I should have listened more intently to the presenters at the dinner party.

November 15, 2006

How to turn a digital music player into a mobile phone

This week Microsoft launched the much anticipated Zune and the world trembled to see if finally a new device could take on the Apple iPod.  While there are plenty of other digital music players out there, none have been able to reach the popularity of the iPod.  Microsoft has thrown the gauntlet down and they think they can succeed on a couple of fronts.

First, the Zune is fairly similar to the iPod so it’s not as though new ground has been broken on the design front.  There are those who like its larger screen and the fact that it’s a little longer than an iPod, and there are those who don’t.  There are those who like the fact that you can watch the video in either portrait or landscape mode.  But the real excitement according to the buzz is that Microsoft has built in wireless capabilities so that two Zunes can communicate with each other and music can be passed between devices.

This of course has led to lots and lots and lots of speculation over feature wars – something that technology vendors specialize in.

At this point I’m going to go out on a limb and say that they are missing the point.  I have a mobile phone (a really awful Panasonic that I dream of getting rid of but that’s another story) and all I want it to do is make and receive phone calls that last until I want them to end and not for arbitrary other reasons.  Yes I like features like a directory and ringtones.  However, I do not want to watch tv on my phone.  I don’t want to surf the web on my phone, and I do not want to listen to music on my phone.  I want to call or text people.  Period.  And the really annoying thing about mobile phones is that instead of coming out with more rational user-friendly software or superior sound quality or comfort they come out with more useless, overpriced, ridiculously unnecessary features.  Go ask 3 of your friends right now how many features they use on their mobile phones.  If you’re in America they probably don’t use any of them.  European football fans probably use a few more.

So the intuitive leap is this – I want my digital music player to play music with really good quality.  Then there are some other features – but I’m not going to get into that here either – but I assure you that they do not include watching films, watching videos, scrolling through photos or communicating wirelessly with other people.  

However, if you ARE going to enable wireless communication, make it interesting. (For the ironically impaired – the following is sarcastic.) How about making the players capable of sensing music on other people’s devices, so that if you have certain music in common, they turn a nice sunny yellow? Or, if a device is full of really annoying music you hate, a red light flashes to warn you to avoid that person, as their taste in music is abhorrent (user-defined parameters, naturally). We could then create viruses that download, say, the entire Mariah Carey catalog onto some unsuspecting listener’s device, after which it dials up the RIAA and gives them your home address and a detailed listing of what’s on your device and how you got it. Now those are features.

I’m not holding my breath on the Zune.  But I am looking forward to outrageous marketing wars from Microsoft and Apple.

November 03, 2006

Isilon Ahead by A Nose

By Susan Dietz

Isilon Systems today announced the release of OneFS 4.5, a single file system that unifies and provides instant and ubiquitous access to digital content and unstructured data.  This next-generation Isilon IQ clustered storage operating system can power all of Isilon’s family of clustered storage systems, including the Isilon IQ 1920, 3000, 6000, Accelerator, and EX 6000.  With the release of OneFS 4.5, Isilon ushers in a larger scale of data management, delivering one petabyte of capacity and 10 gigabytes per second of throughput in a single file system and single volume.  The system is designed to eliminate the cost and complexity barriers of traditional storage architectures.  Among its features are the ability to scale to one petabyte of capacity, throughput of up to 10 gigabytes per second, and data protection schemes, called N+3 and N+4, that let customers withstand the loss of three or four simultaneous disks or nodes within very large clusters while maintaining 100 percent availability of all of the data.  The stated goal of the new OneFS 4.5 system is to let enterprises bring their huge data archives online, thereby making them as accessible as any other critical business information.
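
For readers unfamiliar with the notation, here is a minimal sketch of the arithmetic behind N+M protection; the stripe widths and capacities below are invented for illustration and are not Isilon’s actual layout:

    # In an N+M scheme, each stripe holds N data blocks plus M parity blocks,
    # so any M simultaneous disk/node losses are survivable, at the cost of
    # devoting M/(N+M) of raw capacity to parity.
    def nm_protection(n_data, m_parity, raw_tb):
        stripe_width = n_data + m_parity
        usable_tb = raw_tb * n_data / stripe_width  # capacity left for data
        return usable_tb, m_parity                  # usable TB, losses survived

    for n, m in [(8, 2), (8, 3), (8, 4)]:           # e.g. N+2, N+3, N+4
        usable, survives = nm_protection(n, m, raw_tb=1000.0)
        print("%d+%d: survives %d failures, %.0f TB usable of 1000 TB raw"
              % (n, m, survives, usable))

The trade-off is visible immediately: the higher the protection level, the more raw capacity goes to parity, which is why N+3 and N+4 make the most sense in very large clusters, where several concurrent failures become a statistical likelihood rather than a freak event.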

Once upon a time, computer storage was measured in megabytes.  Isilon’s leap into the forefront of the storage arena seems to be generating quite a buzz with its introduction of mega storage, and has perhaps contributed to Isilon’s recent application for an IPO.  In conjunction with other recent announcements, these products may be just what launches Isilon into the big leagues.  With recent legislation concerning electronic discovery, storage systems and quick, easy access to data are suddenly much more important to a company’s success.  Legal fees are bad enough, but paying those fees for every hour that the legal team is pawing through years’ worth of data could conceivably break some companies.  So while storage may not be glamorous, it is vitally important.  Isilon’s new system may not rock the entire market, but it does raise the bar significantly.  The system’s ability to protect data from spontaneous loss is likely to be one of its more important selling points.

Gone are the dusty rooms full of file cabinets to which the unlucky clerk was relegated on a semi-regular basis.  Data storage has come into its own and is no longer an afterthought when a company sets up or renovates its IT department.  It is now just as important as, if not more important than, processor speeds and desktop applications.  If companies can start keeping all of their data, and not just the most current, at their fingertips, it will most likely benefit business models across the board, from sales and customer service to accounting to electronic discovery.  A customer’s entire history could be accessible to anyone in the company at any time, most likely increasing sales and ensuring the customer has a positive experience regardless of which person within the company is their contact.

October 05, 2006

In Search of Another Monopoly: Intel and the EC

It has been reported that the European Commission review panel has weighed evidence brought against Intel in order to choose this week whether or not to file formal antitrust charges related to alleged unfair competition against AMD. The Commission has been investigating Intel for several months regarding its rebate schemes to determine whether they violate regulatory conditions. If the decision is made to approve the case team’s assertion that Intel’s schemes are illegal, it would advance the Commission toward issuing a formal Statement of Objections against the company. At that point, Intel would have the opportunity to respond to the charges. No deadline was stated for the Commission to decide whether to take the case further.

Given the EC’s proclivity to investigate monopolistic practices ad nauseam, it is amazing that Intel has largely escaped unscathed so far. There are few IT targets as large as or larger than Microsoft, a perennial target of EC investigators, and Intel, with its commanding position in the microprocessor marketplace, is a logical candidate. AMD is the main player in this, as it is the one with so much to gain. Although Intel and AMD have had legal skirmishes in the past, Intel has generally been able to outflank the arguments of its smaller contender, and until a couple of years back had been rather successful in keeping AMD marginalized, or viewed as the low-cost, or “other,” provider of x86 technology. With Opteron, AMD became a much more visible thorn in Intel’s side and, despite slow adoption of its compatible 64-bit computing by a certain large ISV in the Northwest of the US, has been voraciously eating away at Intel’s market share. As this has unfolded, Intel has done its best to keep its customers in line, whether through attempts to stanch the 64-bit dilemma by releasing EM64T, or through various less public means, including the ways in which it prices chips for its various large-volume customers and provides co-marketing dollars. It is these types of actions that AMD contends are unfair or illegal.

While we have no insider knowledge of any of these actions, we are observers of the industry nevertheless. When Opteron was launched, it was very interesting to see how traditionally strong Intel partners reacted. IBM, for example, embraced Opteron early on, but did an interesting branding two-step as it pigeonholed Opteron as an HPC-only solution and branded its servers outside of the xSeries, the official home of Intel-compatible architectures. Microsoft announced that it would support Opteron, but then didn’t release Windows Server code until April 2005, many a moon after Opteron’s general availability. Similarly, HP announced Opteron servers only after Intel announced EM64T. Ironically, it was Sun Microsystems, the traditionally SPARC-only shop, that took the greatest interest in Opteron and was quick to promote all the glory of the platform.

So where does this leave us? By 2006, the market has warmed considerably to Opteron, and even Big Blue has straightened out its thinking re: Opteron. Has Intel engaged in anticompetitive practices? We are sure the EC will tell us its view, but from the viewpoint of a market watcher, it would take a credulous observer not to wonder why the big players acted the way they did back in 2003 and 2004. For an agency that is beside itself with reaction to Microsoft Vista, some of the apparent machinations of Intel would seem hard to dismiss without a very close look.

October 01, 2006

Silicon Valley Gets Wired

Silicon Valley is set to become a free wi-fi hot spot.  The open network Silicon Valley Metro Connect will offer universal broadband wireless Internet access to all Silicon Valley users, covering 42 municipalities and nearly 1,500 square miles. Beyond providing wireless access to the general public, the network will also be capable of supporting a broad range of uses by residential, small business, public sector, and commercial users.

The Silicon Valley Metro Connect team, including companies such as IBM, Cisco Systems, Azulstar Networks, and SeaKay, is offering a combination of innovative technology capability and public benefit. Silicon Valley Metro Connect will build the network on the latest Cisco Systems mesh wireless infrastructure technology, with a technology upgrade program to ensure long-term network vitality and scalability. IBM will provide network design and integration services, as well as technology applications for public agencies and local utilities. Azulstar Networks will act as the network operator for service provisioning of the 802.11b/g base wireless network. SeaKay will work with municipal and public benefit agencies to customize the network to their needs, and will also spearhead outreach and digital inclusion programs to meet the economic development and social benefit objectives of the network. The social benefit objectives include providing an alternative communications medium for first responders -- fire, police, and emergency medical -- when traditional communications systems may not interoperate, enabling healthcare workers to access information wherever they are, and helping students engage more easily in learning beyond the classroom.

Silicon Valley Metro Connect is a privately owned and operated network that will be supported by a sponsorship format.  The goal of sponsorship is to ensure a diverse stream of revenues, hopefully so that the network can weather changes in technology and the economic environment over time. The wireless network will offer up to 1Mb data speed for the free foundation service and comes with built-in protection of user privacy.  It will also include digital divide programs for economically disadvantaged users. For those who wish to upgrade, premium fee-based services such as wireless VoIP and video streaming will be available. Beginning in 2007, the Silicon Valley Wireless Network will leverage the WiMAX IEEE 802.16 wireless standard in the 2-11 GHz operating bands to offer greater throughput for mobile and fixed users and higher quality of service for video, voice, and data.

Internet access for everyone is all good, but what about security issues?  True, any illusion of privacy on the Internet is a pipe dream, but wi-fi is still not as secure as one would hope, and any hacker can load up a van with equipment and drive around the metro area until they find something they can use.  And are there any protections from Big Brother?  It’s not quite clear from the announcement what the “protection of user privacy” really means, but the parameters of that privacy should be made clear to everyone signing on, every time they sign on.  Still, there is an element of “caveat emptor” for everyone in cyberspace these days, and any person who wishes for true privacy and anonymity should most likely stay off the Internet altogether.

Then there is the issue of the Silicon Valley Wireless Network competing with local businesses.  What would the local internet cafes and wi-fi hot spots have to offer if the population is signing on for free, besides the essential coffee?  Local government competing with local businesses is usually a very bad idea, not least because it undermines the local tax base.  But perhaps the wi-fi hot spots and internet cafes can offer faster connections, or more security, or support for hardware and software issues, rather like an Everyperson’s IT Support Group.  One thing that is not in short supply in Silicon Valley is creativity, so we expect the small business owners will be able to adapt, and perhaps even thrive, in spite of the new competition.

September 28, 2006

10,000 Itanium Apps Now Available

Earlier this week, at the Itanium Solutions Summit, the Itanium Solutions Alliance announced the availability of 10,000 applications that can run on Itanium 2-based systems. This represents 50+% growth in the number of available applications since the Alliance was formed last year. The Alliance also noted that Oracle will work with it to certify Oracle software on Itanium-based platforms. At the same time, the Alliance quoted IDC numbers showing that Itanium-based systems grew about 36% year over year (Q2 2006) and that Itanium-based platform revenue is roughly 44% of POWER-based server market share and roughly 45% of SPARC-based server market share.

When this alliance was first announced last fall, the requisite cadre of card-carrying members was out and about briefing the analyst community on the Alliance’s vision of grandeur and how soon Itanium would overtake all server platforms of relevance. Roughly a year into things, Itanium has seen decent growth in the marketplace, and the alliance is more than likely one of the reasons this growth has occurred. As part of the analyst outreach last fall, there was the implicit and sometimes explicit statement that, unlike other chip-based alliances (read: power.org), this alliance was “open” and represented the interests of many vendors, not just one. Of course, there are inaccuracies in this assessment: power.org is about an architecture, not a chip; it has dozens of vendors involved, not one; and it is far from “closed.” Nevertheless, after a year, we see Itanium measuring itself as having achieved 44% of the market share held by POWER and roughly the same for SPARC. The message seems clear: we are better than those guys, we will overtake them, and the world will look like the place we want it to be. OK, perhaps, but perhaps not.

10,000 applications is a decent number, especially when considering that they are server applications. This is a far cry from the much smaller level of industry support evident just 2 or 3 years past. This is good news for organizations that have either been ordered by their systems vendor to embrace Itanium or for those genuinely looking for an alternative. While one might realize this is a fraction of the applications available for other platforms, it still demonstrates a growing commitment to the platform.

One thing that bugs me about vendors, their alliances, consortia, and other forms of industry coupling is the urge to define oneself in terms that claim superiority to others in a very relative fashion. For example: we are 44% of POWER’s market share, or we are growing faster than they are. So, is Itanium only 44% of what it should be? This dredges up the memory of 1982 Ford automobile ads in which the announcer assured the audience that the new Fords were 50% higher in quality than the ones made just two years earlier. How did that make a 1980 Ford owner feel? Is the fact that Itanium, now 10+ years in the making, has garnered less than half of the market share of either of its main non-Intel competitors really all that impressive, considering the number of competitive platforms that were shot by their owners, who ran to Itanium to solve their market share woes? Of course, all of the focus on the “other guys” allows vendors to overlook some more pertinent issues, such as: what is your identity? Comparisons are fine, but without a clear definition of one’s self, it’s all about the other guy. Talk about a codependent relationship in the making.

Nevertheless, we are seeing growth in Itanium sales, and the Alliance should take ample credit where it is due. But at the same time, I wish they would be a bit more honest about what they represent. No matter how often or how loudly they say it, Itanium is NOT an industry standard platform, NOR is Windows an open platform (as was claimed at the press event). Windows is a de facto industry standard, given its large installed base and the thousands of manufacturers providing the hardware on which it runs. Itanium is a single-vendor-sourced chip that lacks any de jure standards body recognition (other than Intel itself), nor does it have widespread de facto deployment in the marketplace. Call a spade a spade and be truthful about it. Proprietary solutions are not inherently evil, but openly promoting one as something else is disingenuous, and one of the evils of marketing.

September 13, 2006

50 years of the Hard Drive

Last night I attended an event at the Computer History Museum in Mountain View celebrating the golden anniversary of the IBM RAMAC 305, the first hard disk drive storage system. While the event seemed well attended by various industry luminaries and at least one TV station van, having my 9-year-old son alongside me that evening helped keep everything in perspective. The museum is a gem in and of itself, and the fact that I have worked on equipment from many of the eras being preserved only served to remind me that I have been around this industry a lot longer than I care to admit. Of course, having my 9-year-old look at an adding machine from 1925 and ask me if it was an early model personal computer only goes to show how far this industry has come, as well as how much of its innovation we take for granted.

One could wax poetic about the changes witnessed over the past 50 years, when San Jose and all of Silicon Valley were the Valley of Heart’s Delight and grew some of the best fruit available on the planet, or when getting to San Francisco from San Jose was best accomplished via the Southern Pacific Railroad, not the Bayshore Freeway. Nevertheless, just as IT ultimately changed the valley, the hard drive forever changed IT. Besides some home appliances, there are few products on the market today that still employ much the same theory of operation, if not simply miniaturized versions of the same implementation, as they did five decades ago. The disk guys got this one right, but did they ever foresee the PC, laptop, or iPod? Probably not; they were more likely looking to invent personal jet packs that would fly you to San Francisco faster than the Coast Daylight and for less than $1.70 one way.

The hard drive changed the fundamental value proposition of information from a serial listing of data, much like a phone book that can only be read forward from page 1, into a random access method, more akin to a Rolodex. From that point forth, we have never viewed data in the same way, and as a result we have digitized practically every piece of information possible. From this innovation there have been many positive commercial achievements, most recently of course the phenomenal explosion in personal music players, much larger storage devices in laptop and personal computers, and massive disk storage arrays at plummeting price points. From this, whole industries such as genomics have been created or at least radically altered, not to mention the nearly ubiquitous downloading of music, video, and other content by anyone under the age of 30 onto the latest iPod, phone, or not-yet-named gizmo. But with the positives come the negatives, and it is incumbent on each of us to remember to use technology for good, not evil.
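
For anyone who has never had to think about the distinction, here is a toy sketch in present-day Python (the fixed-length record layout is invented for illustration; nothing about it is RAMAC-era) contrasting the two access patterns:

    import io

    # A stand-in "disk" holding 10,000 fixed-length records.
    records = [("record-%04d" % i).encode() for i in range(10000)]
    REC_LEN = len(records[0])
    disk = io.BytesIO(b"".join(records))

    # Sequential, "phone book" style: read from the start until it turns up.
    def find_sequential(f, wanted):
        f.seek(0)
        while True:
            chunk = f.read(REC_LEN)
            if not chunk:
                raise KeyError(wanted)
            if chunk == wanted:
                return chunk

    # Random access, "Rolodex" style: jump straight to a computed offset.
    def find_random(f, index):
        f.seek(index * REC_LEN)
        return f.read(REC_LEN)

    assert find_sequential(disk, b"record-9876") == find_random(disk, 9876)

The sequential lookup touches every record written before the one you want; the seek touches exactly one. That single difference is what allowed databases, filesystems, and eventually music players to treat storage as something to jump around in rather than a stream to replay.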

One of the evening speakers from a large disk storage vendor made comments about how wonderful it would be when every event transpiring in the range of a micro camera would be economically recorded and stored. He noted examples of being able to see your child’s first step, even if you were not there, or capturing images of suspicious persons doing questionable things around your home and alerting your alarm company — these are all positive applications of stored information.

But the cheerleading around capturing and storing everything all the time is just too Orwellian for me. Constant monitoring and recording of our every move could be a very likely, and not terribly expensive, scenario in the near future. But to me, the cost of this mindset is enormous. The UK is a great example of better security achieved through the use of cameras, but as one of those treasonous colonists, I am not thrilled to think of a future where every public action I take will be recorded in perpetuity. As much as I would love to capture forever all of the important moments in my children’s lives, I would much rather know that the world they are going to experience is not one in which they will lose all privacy and personal rights. In this time of paranoia that lets us rationally decide that carrying bottled water onto an aircraft is a precursor to a terrorist act, it behooves us to consider extensively the impact of our actions, especially when extolling the virtue of recording everything all the time. If this is the mindset that we have achieved through technology, then I will call up the Luddite society on my mobile phone and ask them to email me an application.

But all my sci-fi paranoia aside, 50 years of the hard drive is remarkable, if not unprecedented. As we look to the future, I wonder if my son will return to the Computer History Museum when he is 59 and gaze in amazement at what the last 100 years of disk technology have brought. He might find it really odd that I thought a 1TB 2.5-inch HDD would be cool, as he looks at relics of 10TB thumb drives and whatever else may have come. But then again, he might stop and ponder that 1925 adding machine once again, and remember the time when he first saw it, and how everything around it was so cool as well. Happy Birthday, RAMAC.

August 15, 2006

Sun, Java, and Open Source

Sun Microsystems held a brief update on its open source initiatives for Java yesterday evening, across the street from Moscone Center, where Linux World is taking place this week. The event was a very informal “checkpoint” (Sun’s lexicon) on the progress of the open source process for Java. The company reiterated its commitment to bring Java into the open source community in substantial parts during the latter part of this year, with all of the code being released to the community by the early part of 2007. A consistent message from all of the presenters was the company’s focus on developers and customers, with the traditional tipping of the hat to binary compatibility, maintaining a single stable standard for Java, and other thoughts consistent with the Copernican company’s long-held views. And happily, the presentations were mercifully short (as promised), and the bar, snacks, and executives were then unveiled to the eager masses.

At one level, not much shocking was revealed, but then again, no shock treatments were promised, and Sun seems more or less on course with its stated open source plans. But for someone who has followed the company for longer than I care to admit, there were some differences in the new reality of Sun that stand in sharp contrast to the company of a decade ago. There is considerable irony in the timing and location of this update, across the street from the show that promotes what has become the universal cross-platform operating environment, namely Linux. Of course, those long enough in the tooth recall very similar aspirations once placed upon Java by its corporate father.

Sun was methodical in stating its desire for transparency in what it does and in how Java is ultimately licensed, and has invited comments from the community at large to this end. This is a markedly different approach than it took in the late 1990s, when we witnessed Sun attempt to drive its view of a Java standard through ISO and other standards bodies, in a failed attempt to dictate to the market just how standards are achieved and who would control Java’s. Now, the company reminds us of its belief that it was the original open source company, citing its work on BSD UNIX and NFS, while overlooking its many single-source-focused behaviors of the kill-Microsoft-or-die-trying era.

There has been a very public shift in corporate leadership, and perhaps the company will undergo a sufficient reinvigoration to take a leading place in the market once again. However, it does cause us to wonder just where the company would be today if it had wholeheartedly led the market-changing notion of open source in the late 20th century with the zeal it proclaims today, as opposed to reacting belatedly in the early 21st century. What a different market we might have seen. Sun, the company that so many times successfully acted in advance of the market to reinvent itself, fell victim to its own success, or what some would paint as arrogance. Nevertheless, in 2006 we see a humbler company taking what is, in our point of view, a much more realistic approach in its drive to win, or in some cases regain, the mindshare of the developer and IT purchasing communities.

July 19, 2006

Open Source and Malware

The first edition of McAfee's Global Threat Report is out, discussing malware and open source among several other security-focused issues. One of the findings of the report is that malware authors are adopting open source development models to aid in the development of more potent threats, as these IT social malcontents seek to inflict their Trojan horses, spyware, and other wasted application development talent on the public.

OK, we are not surprised to see McAfee trumpeting its latest assessment of the darker side of Internet life; after all, its business premise is based upon being an immunizer and triage specialist for an IT community afflicted by the delinquent behavior of some. However, the revelation that social reprobates of the software-coding ilk would copy the best practices of legitimate developers should hardly be surprising. This is akin to being surprised that organized crime would use the same tools as legitimate business as it seeks to ply its illicit trade and engage in other nefarious activities.

Some with disingenuous goals might take this opportunity to imply that closed or vendor-sourced code is somehow inherently superior to that of the open source community. Of course, when contemplating this thought, it is imperative to remember that the earliest viruses, and the bulk of viruses, spyware, malware, and Trojans ever since, have all targeted vendor-sourced solutions. The fact, unfortunate as it may be, is that all software is developed by flawed coding machines (human beings) that are prone to mistakes of omission, not knowing every conceivable malevolent possibility for the misuse of the code they are so diligently creating. Just as in biology, a life support system and food supply encourage beneficial life forms as well as the less-than-beneficial.

There is no magical balm in either open source or vendor source that will prevent malicious code creation. The only way the dark code stream will stop is if the individuals responsible have a change of heart and become productive net citizens. However, absent divine intervention, this is unlikely to happen. Sure, improved software will help slow down the threat, and security precautions are great tools, but we are naïve if we believe that the scourge of malware will be extinguished any time soon, short of simply turning off all computers. It behooves us all to accept the reality of where we live, be vigilant, take reasonable preventative measures, and not fall prey to reactionary thought in the process.

June 29, 2006

Congress and Its Amazing Ideas

By Susan Dietz

Protecting our children is a noble and much-needed goal, and it’s true that the Internet is a breeding ground for predators.  Action should be taken to protect the most vulnerable among us from becoming victims.  However, the US Congress is allowing fear for our children to drive short-term behaviors and is tackling the problem of protecting our children with a slew of proposed measures ranging from the laughable to the downright scary.  Some ideas include labeling web sites with a rating system similar to the movie rating system, forcibly blocking off-color web sites, surveillance of American surfing activities, making certain hyperlinks illegal, recording which customer is assigned which IP address, dispatching “search and destroy” bots to disrupt peer-to-peer networks (and the computers which use them), restricting webcams, recording email correspondence and web sites visited for a period of 18 months, making search-engine providers responsible for offering questionable or illegal web sites with any search, letting government bureaucrats rate chat rooms, subpoenaing internet providers, and granting censorship rights to federal bureaucrats rather than judges.  

Constitution?  We don’t need no stinking Constitution!  Surveillance and monitoring without a warrant are, the last time I checked, still illegal in most circumstances (even considering the Patriot Act), as is usurping the power of judges to shut down illegal web sites.  Destroying someone’s home computer with a “search and destroy” bot is also beyond the pale.  Despite reassurances to the contrary, it is almost a guarantee that innocent computers will be targeted.  Those things happen, and an “oops, sorry” from the government just wouldn’t be enough to repair the damage and disruption to someone’s life.  And as for bureaucrats monitoring chat rooms?  Exactly how long, then, until our tax forms come complete with emoticons?  Say we decide that we can start acting like Singapore, or worse, China.  Soon, all that is read will be newspeak, and the digital rewrites of history will be automatic and immediate.  It’s a rather Orwellian specter to contemplate, but one which the current administration seems to favor.  The Internet isn’t the only arena in which the message to the American public seems to be, “anything we don’t like, we will make illegal and also work behind the scenes to block you from it, just in case you decide to try and break the law.  Remember, we know who you are.”

Let’s set aside the issue of the legal and moral implications of some of the proposed measures.  Punitive and reactionary “solutions” are often the most expensive and least enforceable actions to take, and history is rife with examples.  The storage capacity alone needed for one of the ideas – keeping a record of email correspondence and websites visited for each person in the United States for a period of 18 months – is mind-boggling, let alone the personnel needed to monitor the monitoring.  Who would decide what is legal and what is deviant, anyway?  One person’s porn is another person’s art, and some images are not easily defined as belonging in either category.  And on the technological side, how, exactly, would some of these wished-for measures be enforced?  Some proposed technologies simply aren’t available, or aren’t available on the scale needed to enact the envisioned scenario, despite the fertile imagination of some members of Congress.  Perhaps one day these technologies can be implemented, but until then, put down the remote, back away from the Star Trek reruns, and think before acting.

June 13, 2006

The Channel and IT Solution Delivery to the SMB Market

Clay Ryder has made some very good points in his article “When You Are Too Small to Be a Named Account”. However, this is a very complicated area, as Joyce Becknell indicated in an article some months ago, and one that the majority of IT vendors very clearly do not understand.

The first of the common mistakes made by vendors is to assume that the potential SME customer base forms a single entity. This is patently not the case, on so many levels. Small and medium-sized companies, and even many larger enterprises, differ not just by geographical base or vertical industry sector. The main differentiator concerns their purchasing habits and their abilities and capabilities to manage IT.

To reach these organisations, the vast majority of vendors typically utilise a channel-based sales approach. However, once again, it is certain that the organisations in the various channels (for there are many) differ enormously in both their ability and their comfort in delivering some of the more complex IT solutions being targeted at SMB customers.

The one thing that is true of almost all channel-based organisations is that they do not see it as their role to “build” new markets and to “educate” the potential user base on the business benefits of the solutions becoming available. The majority of IT channel organisations are in the product and service delivery business. Unfortunately, a number of the major IT solution suppliers miss this important point, which may result in poor, or at least slow, adoption of new IT solutions.

Meeting Centeris – Discussions on the Mid-market

I had the opportunity to have a phone briefing today with Barry Crist, the CEO of Centeris.  They’re a small company, based in the US Pacific Northwest, that’s just starting to expand into Europe.  They’ve got a product, Likewise, that helps you manage Linux servers in a Windows environment.  We had a good conversation about life, the universe, and everything, as Centeris is hanging out in Germany this month – the European epicenter of all things Linux – so we got introduced, and I discovered a company I wanted to share with you.

This is a good company to watch.  It was founded by a bunch of ex-Microsofties. What a great concept – Microsoft-minded people who want to work with Linux rather than thinking it should be obliterated.  There is far too much polemic about Windows versus Linux in the greater IT community, which is a waste of energy.  Both platforms are going to continue to thrive and have their purposes in life. There are areas of overlap where sales teams will slog it out, but the decision to use Windows or Linux need not be binary.  What is really helpful to end users are tools that help them make the systems integrate.

This got us talking about the mid-market.  One of the ongoing issues for mid-market companies is that they don’t have the leisure of hiring IT specialists – they tend to hire good generalists who can handle a bit of everything.  That means they need to find tools that are cost-effective, quick and easy to install and targeted toward them.  Barry feels very strongly about the needs of this market and we commiserated on the challenges this market faces.  We didn’t have enough time to solve the world’s problems, but they’re on our radar screen now and we welcome them to Europe.

June 07, 2006

IT and User Behaviour

Should IT administrators, IT managers, and CIOs be legally responsible for monitoring the use of IT systems by users? More especially, should they be held to account when users do not follow company standards and national laws governing the use of keyboards, mice, etc.? There have been many cases in Europe whereby users have taken their employers to court over many things. Some of these are more than justified, where organisations flout the laws. However, occasionally a case arises where a user sues their employer because they, for instance, used their PC for many hours a day without a break. Now, if the employer is forcing this work pattern to take place, it should be sued; but if the user does so without any coercion, who should take responsibility?

The company has a duty to ensure that its staff work in accordance with the law, and it should take steps beyond this to educate its employees on best-practice working conditions. However, the question then arises: should the employer arrange to monitor the work patterns of individuals in order to point out when they are not following guidelines? Or is user privacy of greater importance?

May 30, 2006

The Processor Market – Confusion or Ennui?

Joyce Becknell
The processor market used to be relatively simple.  Customers mostly chose RISC or CISC and then purchased the fastest system they could get within their budgetary guidelines.  But now processors come in multiple forms – there is still the underlying architecture – CISC, RISC, and EPIC (Itanium) – but there are also questions of multithreading, multiple cores, and cache sizes.  Understanding which technology to use, and how to compare vendors amid their variations of nomenclature, has made processor decision-making a trickier undertaking.

I believe that the vendors are not helping to simplify issues in this market, and that this confusion, far from being irrelevant technical detail, will in the end have an impact on how companies purchase not only hardware but software as well.  I’ve recently written a paper that delves into these issues more deeply, but I’m wondering how much IT managers really care, and whether they feel they understand the impact of changing processor architectures on their IT infrastructure.  I’d like to know if it makes a difference to customers, if they feel they can make valid comparisons between vendors, and if it’s affecting how they purchase software.

Tony Lock
The issues raised here by Joyce do have relevance in the everyday world of CIOs, IT managers, and those charged with the procurement of business systems. The decisions taken can have profound impacts on systems performance and the cost of service delivery. Unfortunately, it is fair to say that the majority of professionals selecting processors today tend not to consider chip architecture at all; most people do not have the time to consider these questions.  However, the processor architecture selected can directly impact the quality of service delivered to the customer or user, which software architectures work effectively, the cost of software license acquisition, and a number of other matters.

This is an area where confusion reigns, often instigated by the processor designers and the server manufacturers making use of the chips, but as in nearly all such cases, the confusion created helps no one, least of all the business users. Indeed, it is fair to say that the lack of understanding of the different processor characteristics does not even help the suppliers themselves.

May 23, 2006

Microsoft Updates the Search Engine for Enterprise

By Susan Dietz

Microsoft recently announced its effort to create unified enterprise information management solutions that integrate new capabilities into the software programs people already use. The new software will purportedly help people create, find, use, and share information across the organization.  Enhancements in Windows Live Search and Microsoft Office SharePoint Server 2007 are aimed at increasing connectivity within an enterprise.  The reported goals of the enhancements are to give people the software tools to report on their knowledge and projects; improve productivity by quickly, seamlessly, and securely connecting people to relevant information; enable people to organize and manage information so they can effectively analyze and apply the data to do something new; and allow people to clearly communicate and quickly share information with other people.

Windows Live Search offers a single user interface (UI) to help people find and use all the information they wish from across the entire enterprise and beyond. It essentially binds together previously separate search solutions, including Windows Desktop Search, intranet search provided by Microsoft Office SharePoint Server 2007, and Internet search via Windows Live Search, among others. Information available to any of these systems can be exposed in one place.  Rich filtering and customizable controls will allow people to personalize their Windows Live Search. Using natural search terms, Windows Live Search can return results in whatever way makes the most sense to each information worker – inline, grouped by category, etc. Previews and visualizations of the data can then help people more quickly determine what action to take.  Office SharePoint Server 2007 adds a new dedicated Search Center tab for finding people within the organization based on “social distance.”
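
As a rough illustration of the federation idea described above, here is a minimal sketch; the provider functions and result shapes are entirely invented, and this is in no way Microsoft’s actual API:

    from collections import defaultdict

    # Invented stand-ins for the separate search providers being unified.
    def desktop_search(q):
        return [{"source": "desktop", "title": q + " draft.docx"}]

    def intranet_search(q):
        return [{"source": "intranet", "title": "Team site: " + q}]

    def web_search(q):
        return [{"source": "web", "title": "Public page on " + q}]

    # One query fans out to every provider; results come back merged and
    # grouped by source, the way a single search UI could present them.
    def federated_search(query, providers):
        grouped = defaultdict(list)
        for provider in providers:
            for hit in provider(query):
                grouped[hit["source"]].append(hit)
        return dict(grouped)

    print(federated_search("anti-virus sales",
                           [desktop_search, intranet_search, web_search]))

The interesting design problem is not the fan-out itself but the merge: ranking results from sources with incompatible relevance scores, which is presumably where the “grouped by category” presentation earns its keep.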

So who actually wants information from across an enterprise in the first place?  Auditors, for one.  They need audit trails for all kinds of things, especially in terms of SOX and the like.  Lawyers are another group – looking for electronic evidence just got a little easier. Discovery searches across all the places where data may reside are not necessarily a good thing or a bad thing, but they are, unfortunately, a necessary thing.  But what about the possibility of information overload? Suppose someone in a large company wants to know about the sales of, say, anti-virus software, and so does a search.  It’s absolutely mind-boggling how many emails that would conjure up.
 
A bigger question is whether this new development makes Microsoft a friend or an enemy of the established search engines Google and Yahoo.  Does this technology perhaps signal Microsoft's aggressive movement into the space these two occupy?  Perhaps.  However, given the dominance of Google and Yahoo, there may well not be any significant loss of market share.

May 17, 2006

How Analysts Can Help

One of the topics that comes up time and again is how industry analysts should be used.  Many people think of us merely as report writers or number crunchers, although there are certainly quite a few companies who understand how to use us and do so.  Analysts often engage with vendors, channel partners, and end users in ways that generally aren’t seen publicly.  We’re involved in a lot of internal work, and a recent article by David Pogue, in his New York Times Circuits column, about Microsoft and the problems it has had with the UMPC got me thinking that this is a really good example of where analysts can help.

The gist of the article is that Microsoft’s UMPC is inappropriately priced, and Pogue puzzles over how this could have happened in a company full of people who ostensibly understand their market and their product.  Pogue concludes that this is a problem of human nature, and he recalls similar experiences in his own work life where employees didn’t raise issues that, in hindsight, they should have.  When I read this article, the first thing I thought was: well, it seems they didn’t talk to the analysts.  I don’t work with the UMPC group at Microsoft, but if that group had run pricing by analysts before launch, I guarantee that flags would have been raised.

Analysts don’t have a vested interest in a company in the way employees do.  Sure, we have companies we like better than others, and we have products and spaces we like better than others.  But if asked, we will certainly point out what we think is good and bad about a product or program without hesitation, and most of us believe that if we’re going to critique, then we’d better have alternative suggestions available.  This is why we argue strongly that analysts should be brought in earlier in the product development and launch cycle rather than later.  The sooner we can add our insight, the easier it is to make changes and avoid bigger problems down the line.  Analysts certainly can’t know everything – our information about a new product is necessarily limited to what the vendors can tell us – but we can certainly add to the process based on our experiences.

In fairness, Microsoft does engage the analysts in many areas, as do many other companies, and if we do our jobs correctly, most customers will probably never be aware of our impact – we don’t discuss these engagements publicly nor do we write about them except for internal client reports.  But for companies or individuals who aren’t sure how to use analysts, or who think of us as just another branch of PR for outbound information only, this is a good example of how analysts can help.

May 15, 2006

How many Security systems do you need to feel safe?

Tony Lock:
Every CEO, CIO, and IT Manager always puts “security” close to the top of their long list of IT-related matters about which they are concerned. It is, however, interesting to note that in many instances “concern” does not translate into either action or investment. Why should this be so, when the whole topic of security is rarely out of the news and at a time when the marketing of “security” solutions has never been more aggressive? A much more important question is ‘how many security-related systems do I need to employ to make my IT infrastructure safe?’

The answer to the last question raised is two, neither of which is a technology. The secret to running secure IT systems demands firstly that people with appropriate knowledge and experience be given sufficient time and scope to understand the security requirements of the business and to determine the impact that these will place on the supporting IT Infrastructure. The second step is then to formulate appropriate work procedures, preferably based on industry best practice, to form the basis of routine operations.

If these two steps are taken, the organisation will then be in a position to decide just what technology solutions are required to support its security efforts. Security is all about doing the right things at the right time. It is not about having the latest, greatest piece of security software or appliance installed. The greatest security technology in the world will not secure anything unless it is administered well. Technology has a role in IT security, and that role is to support good practice.

Joyce Becknell:
Tony has touched upon some really important points here, and I want to explore them in a little more depth. He believes that security staff need to understand both the business requirements and the impact on technology – which is different from saying they need to know how to work security into the technology. I think what's important here is the implication that these must be holistic, or architectural, or systemic people who look at the picture from a higher view and don't get caught up in the minutiae. It seems to me that as an industry we understand that no one product is going to provide the consummate organizational security blanket, but the only alternatives seem to be lots of point products oriented toward very specific technologies, or developing a staff that understands the intricacies of encryption and legalese.

This seems to me a much easier thing to say than to do. I think there is an underlying issue here about the relationship between the business staff and the IT staff in any organization. As in all things, different organizations approach this differently, but it would be good to see some best practices emerge on how to do it. There is a tendency to hold up financial institutions as the example, but I think they are not the norm most organizations will follow. That does, however, segue nicely into Tony's second point: in addition to people who can view the overall picture at an organizational level, processes and disciplines need to be put in place around that security. Which leads to the question: should most companies be focused more on people and processes now than on particular security products? I think most organizations are suffering security breaches now because the people and process side is not as advanced as the products they have in place.

March 31, 2006

A Vista with no View of an EU Sunset

The European Union's competition regulator has warned Microsoft that it will not be allowed to sell the new Vista operating system in Europe if it comes pre-loaded with certain features. The antitrust commissioner's office has stated that it is concerned with Microsoft's plans for Vista's integrated internet search, DRM, and document management software. Separately, Google, Symantec, IBM, Sun, and Oracle have stated that they are concerned Microsoft could use its Internet Explorer 7 Web browser to unfairly direct computer users to Microsoft's own search service, or use DRM to lock up documents in such a fashion that non-Microsoft office productivity applications would not be able to read the files. The genesis of the letter from the commissioner was that Microsoft had asked EU regulators to set out any Vista concerns they might have. (Some advice to Microsoft: be careful what you ask for; you may just get it.)

These days it seems whenever Microsoft twitches, the industry perceives the quaking steps of a giant, and EU regulators swing into action to protect their political base from the software malfeasance they see being perpetrated by the Redmond Goliath. Regulators' phones and email boxes quickly fill up with inbound messages from industry competitors all too happy to assist regulators in understanding the gravity of the situation. Although much of the discussion will entail how regulators are just looking out for the best interests of consumers and the market as a whole, after nearly a decade of this lost battle of the marketplace being replayed in the courtroom, it becomes difficult at times to understand how this is really going to help the bulk of the non-geek, non-ABM, non-I-have-an-axe-to-grind-daily crowd. Note that the previous sentence segregated out the digital literati, vendors not named Microsoft, and those with serious cash-flow envy.

With Windows 95, the first consumer-based Internet-oriented operating system, along with Office 97, the first Internet-oriented office productivity suite, Microsoft made the conscious decision to make it easy to access the Internet and share information, in part due to customers' demands that software be easier to use. Remember DOS 5 or Windows 2.1? In some respects, this is where much of the trouble started. Microsoft obliged its customers; unfortunately, this ease of use came at the price of social reprobates exploiting the ease of sharing to develop a cornucopia of viruses, bug exploits, Trojan horses, and other just plain bad stuff. Anti-bad-guy companies such as Symantec, McAfee, and others came to the rescue with the malicious code police, and Microsoft tightened up the ease of use to the point where merely receiving an email seemingly required clicking OK multiple times if it had any images, code, links, or any of that what-makes-it-easy-to-share-information stuff in it. What a great solution this is for end users.

Consolidation of the once very disparate worlds of the LAN and the Internet, through integration of Explorer and Internet Explorer, gave users a unified view of the information resources they were trying to share. Of course Navigator and HotJava aficionados didn't like that, but Grandma, techno-phobes, and mere mortals largely did. Yes, let's not forget to complain about all that unfair competition against Netscape. That competition started prior to Netscape's commercial existence, when Microsoft indicated back in 1994 that it would build a browser into its OS; back when Mr. Andreessen was busy creating his browser over at NCSA. That unfair competition took the form of Microsoft helping Netscape along as an ISV until such time as Andreessen and company started bashing Windows publicly as an irrelevant collection of device drivers. Is it at all odd that the Redmond crew decided to be less than helpful to Netscape going forward? And just who is irrelevant today? But, I digress…

Now desktop users are crying out for stronger security in their OS to protect them. Pontificate as you will about Windows’ security flaws, but also note that Microsoft has responded with firewalls, software updates, and a future with anti-spyware and DRM to enhance security. Of course this torques off the competition who want all of this security to be third party. It seems that some will only be placated if courts dictate that Microsoft must deliver software that is sufficiently unprotected so that third parties can malign it and then sell fixes and protection for it. “Hey that’s a nice operating system youse got dere, it would be shame if it was to be infected with viruses and malware and we brokes in and stole youse files.”

Businesses have long complained that they are required to become IT centers of excellence in order to use technology, diverting them from their core business competencies. Why should consumers be forced to do the same? Some will claim that this integration comes at the price of third-party software having a harder time getting sold, since Microsoft will bundle everything. In many cases this has been true, but it is also what many in the marketplace have been asking for: something easier and safer to use. We have heard endless harping about how open source will change all of this and stop this travesty known as Windows, or Office, from being perpetrated on the marketplace. Well, until Novell announced SLED 10, the thought of a mere mortal dumping commercially integrated and supported software for a collection of do-it-yourself technologies was laughable. SLED 10, by the way, is offered by one of those evil commercial software vendors trying to make some money in the marketplace. From what we can tell, they have done a damn good job of creating the first potentially competitive Linux desktop for the non-geek. But this is another digression.

There is more than enough blame to go around for how we as an industry and user base have arrived where we are. But many, including the EU regulators, are missing the point. Huh? We are fixating on a platform of diminishing importance. No way, you say! Well, consider this. The arguments about Vista are predicated on a platform (desktops and laptops) that will continue to diminish in proportion to the totality of information access devices (PDAs, phones, iPods, game consoles, to name just a few) deployed by consumers going forward. Further, the hardware requirements to run Vista will eliminate all but the most recently purchased systems, or require substantial upgrades just to boot. (Microsoft is creating a great opportunity for Novell here; Jack Messman should send Steve Ballmer a bouquet of flowers and a thank-you card for this one.)

We don’t see equal outrage that most telephones cannot be taken from one mobile supplier to another in North America. Where is the demand for ease of third party software installation on the iPod, PDAs, phones, game consoles? Yes, France is complaining that iTunes should sell music in different formats to spur competition (this is coming from the only western country using SECAM TV standards). Nevertheless, these other devices are the consumer platforms of the early 21th century. So while regulators seem hell bent on defining what Vista will be, the true growth area of the market seems to be overlooked in the process. To us they are saying “Let’s try to fix the past while ignoring the future.” This way we can repeat the cycle and maintain perpetual employment for those with nothing better to do. Is this really the best we can do for and as an industry? Sigh.

So, I have vented my spleen. No, I don't own any Microsoft stock, they are not a big customer pumping gadzillions of dollars into my company, nor have we even received a free copy of Vista. Many of you will think we are representing the devil; others will think we are just nuts. If we have prompted you to think, that's good enough for me. Sageza would like to hear your thoughts, whether you agree or disagree; just be sure to spell our name correctly.

March 13, 2006

Why we won’t be talking about Open Source in the future

Being a good analyst, I attended IBM's Open Source and Linux analyst event last week. Although it was predictably on the east coast, which means a good west coast guy like me got to spend more time in airplanes than at the event itself, I nevertheless made the trek to hear from Big Blue. While some of the issues discussed and positions taken were predictable (not in a bad sense, just that they were consistent with past communications), I started having a sense of déjà vu, with the wayback machine pointing to 1996. Once Mr. Peabody reassured me that it was in fact a decade later, it all started to come together for me. So just what in the world am I talking about? Read on.

There was much talk about the specialness of Open Source, not necessarily so much from IBM as from others in attendance. While there are religious devotees who believe that the most important role of Open Source is to bankrupt Microsoft, there are many who are not on the Redmond attack squad who nevertheless talk about Open Source as if it remains somehow discrete, or fundamentally different, from other software. Then it struck me. Well, OK, it struck me about two hours into the overbooked flight home, in seat 23C, when I ran out of the munchies I had pilfered from the snack table at the IBM event. This bifurcation of Open Source from all other software was a very familiar behavior: it is the same one that ten years ago insisted that the Internet and related technologies were discrete from IT and the datacenter. A ha! As I am fond of saying, there is little new in the world; all things old are new again.

While people have the need to segment out what is new and/or different from what they know, the reality is that from a bits-and-bytes perspective, open source software is no different from any other. It is code that runs on the machine and hopefully solves a problem and delivers value to the end user. The development model and the pricing model vary, as do issues related to intellectual property and ownership, but at the end of the day it is just software. In 1996 we might have asked if a given piece of software or equipment was "Internet capable", but today the question is never asked, as networking is just an assumed attribute. Some now ask: is this software Open Source compatible? On the surface it may seem silly, but the real question is: will this software be well behaved, standards compliant, and not break my IT infrastructure? (Or get me fired.) Does this mean that all software will be open source in the future? Not very likely, but the need to somehow distinguish Open Source from all others will, I believe, diminish, if not vanish altogether.

Many years ago I postulated that Java was a priceless technology – priceless in the sense that in order for it to be successful, it had to be ubiquitous, and in order for it to be ubiquitous, it had to have a price point of nearly zero. We can argue for quite some time as to whether Sun was successful in this regard, but when we look at where Open Source is thriving, it is in delivering the minimal level of functional foundation on top of which discrete value can then be added. Whether this is file and print services, databases, basic management, etc., these are all priceless technologies as well. In 1994 the TCP/IP stack and utility business for PCs was probably about a $700 million affair. In 1995, it nearly evaporated when TCP/IP began to be natively supported in Windows 95. Thus the stack became priceless once it was bundled into the ubiquitous OS. In many ways, Open Source software is making the same demands on the marketplace: these technologies are priceless, so stop trying to make money on them, and instead invest those same dollars in adding value on top of the priceless technology. As a result, freely distributable, standards-based, basic technology will be a given; let's learn to innovate on top of it, where the real value, and may I add, margins, will be found.

From this vantage point, the future would be a time when open source technology is prevalent: found by itself, bundled into commercial solutions by business partners, and distributed and supported by major systems vendors and ISVs. It would provide key building blocks, vetted by a large constituency that seeks to ensure their widespread availability. High-margin, differentiated, value-added solutions would be built on top, delivering the high value sought by the customer, combined with the support and backing of the commercial software industry. Heresy? No way. Software communism? Negatory. A smart way to focus an industry on value-add and innovation? Absolutely.

So, let’s assume that the industry as a whole realizes the value in this approach. What would happen? The number of open source communities would increase. Contributions to open source communities would increase dramatically. Commercial vendors would decide to cede certain key technologies into the open source community. Business partners and solutions providers would weave together tailored solutions comprised of commercial as well as open source components for their customer bases. Notice that the end customer is probably not all that interested in modifying the code (perhaps they would take a peek inside) but rather Open Source would become simply another part of the broader ecosystem and lose that specialness that has been artificially adhered to open source technology. Suddenly, it would just be assumed to be present, and there would be no need to talk about it anymore, just like the TCP/IP stack, the Internet, or what Sun had envisioned for Java.

Not too surprisingly, when one looks beyond the hype of the moment, much of IBM's position on Open Source seems to be following the scenario I have laid out. It is unlikely that Big Blue will suddenly cease to have any software revenue; in fact, I would argue that it will ultimately rise, as the company expands its reach and boosts the market overall through its many open source contributions. While IBM is not the only vendor that appears to have seen the logic in this course of action, it is clearly the largest. With the Armonk Argonaut on board, it is likely that much of the industry will follow, and will eventually not need to talk about Open Source any more than it does other key foundational technologies.

Analyst Rant – How many toolbars does any sane human need?

Every now and then analysts have to get on their soapbox and rant. Today's one of my days. I've been ranting lately about communications being behind computing. So I got talking about the whole thing with another analyst colleague, Dale Vile, who owns Freeform Dynamics; I rambled on about integration, he rambled on about virtualization, and we agreed that without those things communications is never going to get to the Promised Land. And this got me thinking about lots of random bits as I was opening Yahoo to read mail there. I caught the bit about downloading the Yahoo toolbar so I could be notified about yet another inbox deluge.

And this is what led to my current rant, which ties in to my earlier ranting. I am tired of Yahoo and Google and MSN alternating versions of toolbars, instant messengers, and other bits of software that require me to keep downloading new versions, suffering endless rounds of popup messages, and so on. I have to have all these services, because my brothers use one service, my friends back in Boston another, and my colleagues scattered about the globe a third. The thing is, I need only one instant messenger program, and I need one toolbar. I do not need two or three. I may want bits from each of the three, and I may want to use all three services, but the underlying service should be singular. The services should integrate into a single platform. That would benefit users. And there's really no underlying brand value in the base platform, is there?

They all look the same to me, taking up all that real estate at the top of my browser. I know Microsoft and Yahoo are starting to play nice, and there are aggregators like Trillian, which I've used on and off over the years, but it's the principle that annoys me. We're supposed to be integrating, not creating separate multiple kingdoms. Sure, people should have the choice to download what they want. But we should also be able to download only the bits we want into the base platform. Me? I use Google's toolbar. Why? Because they have a version for Mozilla Firefox, my browser of choice. When is this industry going to realize that playing nice leads to better customer experiences than playing difficult? This isn't going to stifle competition. I promise. And while you're thinking about this, go ask Dale about the difference between an email address and a phone number. You'll get an interesting answer.

March 09, 2006

Samsung's Newest "I Want" Toy: The SGH i310

Samsung has announced its new phone, the i310. Boasting an 8GB embedded hard drive, the phone runs Windows Mobile 5.0 for Smartphone, and also offers video recording and messaging, a 2 megapixel digital camera with flash, MP3 playback, dual speakers, Bluetooth, USB 2.0, voice recognition, a document viewer, and TV-out, among other features. It is being shown to the public at CeBIT, with general availability in Europe during the second half of this year. There is no official pricing guidance, as the company indicated pricing would most likely be dependent upon the customer's mobile contract.

With the latest in mobile listening technology and up to 4GB of hard drive space devoted to music, those who enjoy listening to tunes on their phone will have about 2000 songs to choose from. However, this swanky little unit is more than just Samsung's answer to the iPod. It also takes movies and pictures and downloads documents from your PC. And oh yeah, it can make phone calls, too. So another androgynous techno gizmo makes it onto the scene.
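As a sanity check on that 2000-song figure, here is a quick back-of-the-envelope sketch in Python (the four-minute track length and the bitrates are our assumptions, not Samsung's):

# Rough capacity check: how many songs fit in 4GB at various bitrates?
# Assumes 4-minute tracks and decimal gigabytes, as storage vendors count them.
GB = 10**9

def songs_that_fit(capacity_gb, bitrate_kbps, minutes=4.0):
    bytes_per_song = bitrate_kbps * 1000 / 8 * minutes * 60
    return int(capacity_gb * GB / bytes_per_song)

for kbps in (64, 128, 192):
    print(kbps, "kbps:", songs_that_fit(4, kbps), "songs")
# Prints roughly 2083, 1041, and 694 songs respectively.

In other words, at a typical 128 kbps MP3 rate you would fit closer to a thousand four-minute songs; the 2000-song marketing number implies fairly low-bitrate (or short) tracks.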

True mobility gets one step closer every year, and the technology to combine all platforms into one seems to be rapidly becoming a reality, most likely driven by the market. There is a scene in the newest version of the movie Freaky Friday in which part of the mother's get-ready-for-the-office morning routine consists of unplugging about four different devices from their respective chargers and piling them in her purse. That level of organization seems to be beyond the average person we've met; and as the public loses patience with multiple units, we will most likely be looking for a single platform that can do everything we need it to do.

Samsung’s SGH i310 is a step in that direction.  Of course, once every application is combined into one mobile platform, then a person’s identity will most likely be compromised when, immediately after downloading all of their pertinent information, they promptly lose the unit. And then again, the sheer demands for all the different kinds of activities we engage these devices in tends to dictate ever more powerful, and larger, form factors, which in turn leads to more miniaturization and the cycle repeats itself. While this is not quite as crazy as the clothing embedded PDAs we have seen demonstrated in shows past, the device still begs the question of how many different ways can it be used and abused. Nevertheless, not that long ago 8GB was unheard of on a laptop, much less a phone. And this new phone has more computing ability than that elderly laptop. Stay tuned…we will be looking back on this in the future laughing on how small the drive on that oversized phone was anyway.

March 06, 2006

When Worlds Collide – Telecom Meets Computing

I admit I am an infrastructure analyst and not a telco analyst, so my biases should all be readily apparent from the start. Having said that, I will add that I have been spending quite a bit of time lately on the telco/mobile/networking side of the world, and it's been an interesting experience. I've come up with some thoughts I wanted to share.

The way I see it, there's been a bit of a battle going on between the telephony model, which (to generalize) charged for the network but not the equipment, and the computing model, which charged for the equipment but not the network. According to all the research one sees floating about, the computing model seems to have won, in the sense that telephony is being integrated into corporate IT rather than the other way around. Mind you, I'm still thinking more about infrastructure than applications (which is another topic I'm going to leave aside just now). Anyhow, it appears that computing is the winning model for infrastructure provisioning, and that's having some repercussions going forward for service providers (SPs).

Perhaps VoIP is the best example of this phenomenon. Since the network is essentially free in most people's eyes (assume I'm speaking of a wireline network here), VoIP is taking off, even if quality is less than perfect (most people equate this with the adage "you get what you pay for" anyhow). This has put fear into the traditional SPs, because if they don't change their model they could become mere bit pipes: a less than attractive business model, and hardly a competitive differentiator. So off they go in pursuit of services, new ways to attract and retain business and consumer customers, and so forth.

The problem is that the computing industry is three to five years ahead of telephony, and it shows. I'm not talking about technology here; I'm talking about the way the industry thinks, how it sees the future, and what it's focused on. For anyone who has ever traveled around Europe, think of the way you have to change networks in every country, and how many SPs you have to deal with. Now imagine if your application infrastructure were that fragmented.

Or instead, look at the way joint innovation, collaboration, and open standards are driving computing. Look at the way vendors and partners are building ecosystems to drive complete end-to-end solutions for users that combine products, applications, and services. That kind of collaboration is not yet happening in telephony. The closest we come is the ecosystem in the mobile world, but even there, true joint collaboration has yet to emerge. The whole notion of open standards bodies (please do not confuse this with open source, or tell me that this is free software – again, another issue) and of building next-generation capabilities together is moving more slowly. Part of this is due to the lingering presence of government in telephony, which has never quite plagued computing. The telephone network was once quaintly a national issue, and you can see the problems that kind of thinking causes if just once you purchase a household appliance in Germany and then try to use it in Italy.

I remain hopeful, however, because we're starting to see groups like the MultiService Forum arise. Whether these groups can be successful depends, of course, on each particular group, but it's good to see SPs starting to think along these lines. The continuing strength of companies like Cisco, and of HP's quasi-stealth ProCurve networking business, is also encouraging. These companies have feet planted firmly on both sides of the industry, and if they're clever they will leverage the strengths of both sides. For now, however, telephony is going to have to go through a bit more angst before it breaks on through to the other side.

February 28, 2006

RSA Conference Thoughts

The 15th annual RSA Conference was a modest success. There were enough exhibitors that the McEnery Center's main floor couldn't hold them all; some spilled out into the hallway, lining both sides of it. This is a refreshing change from recent exhibitions, especially those held immediately after the dotcom crash. It's one signal that the industry is getting back on its feet and seeing some healthy growth.

One of the buzzwords this year was "compliance." Most security vendors seemed to be focused on enabling their customers' security by making sure all employees and employee devices become compliant with the customers' standards. This is all well and good, and in my opinion an admirable goal. However, I was reminded of the fashion industry. Say someone decides that culottes are "in" this year. Suddenly, every designer is churning out their version of culottes. There are different fabrics, different cuts, different colors, and the customer has a wide range of selection… as long as she's looking to buy culottes. The phenomenon may start with a rumor that a name-brand designer is making culottes, after which everyone jumps on the bandwagon so that designer can't corner the market, or it may be an organic outgrowth of the fact that Capri pants were in style last year, making culottes a natural choice for this year. Either way, the result is the same: most everyone is offering culottes in the stores. And in the security field, most everyone is offering compliance tools.

Of course there are always exceptions – nothing is ever completely one-dimensional. Along with Microsoft proclaiming that the long-awaited IdentiCards will soon be a reality, the big vendors were able to offer entire wardrobes of security choices, while some of the smaller vendors focused on the accessories of the security world. After all, a customer can't wear just one item. If that same customer knows what she wants and needs, she could generally find it at the conference.

Overall, the people were friendly and informative, the weather was wonderful (especially to someone traveling down from the frozen North), and the tchotchkes were plentiful. The smaller people who live in my house were thrilled with the booty, and I was pleasantly impressed with the overall quality of the conference.



January 11, 2006

Living Large with a bag of Doritos

At the end of last year, just in time for Christmas, a technological child was quietly reborn. A GUI technology largely supplanted by the Web, Web Services, and a whole lot more has experienced its first major version release in more than a decade. What is this Christmas rebirth, you ask? The X Window System X11R7.0 and its companion X11R6.9. Of course, you already knew that. Alex, I'll take obscure 1990s technology for $400. The two releases use the same source code, but X11R7.0 features a modularized and autotooled source tree, whereas X11R6.9 takes the traditional path with the imake build system. The latest releases are the work of more than fifty volunteer contributors working under the release management team of Kevin Martin, Alan Coopersmith, and Adam Jackson, with the support of Red Hat and Sun Microsystems.

X Window, X Terminals, and PC X servers are all memories of a long-ago time when I did real technical work for a living, and they were the basis of my first foray into the market research business. Many a night I spent pulling cables, compiling C code, and munching on a bag of Nacho Cheese Doritos while watching the wheels of technology turn slowly and display the results on my terminal. Only real geeks knew about X, and I was a qualified DECnet and TCP/IP geek living large in the glory-filled client/server IT cloud of the early 1990s.

So what is the deal here? An excuse to ramble? Well, yes and no. X Window was incredibly relevant when networked computers were typically servers and workstations running UNIX or VMS, and the X Terminal was the poor man's thin client/workstation. Prior to browsers, this was one of the few ways to graphically access remote applications. It was also a time in which TCP/IP was expensive third-party add-on software for the desktop (the PC, that is), and native out of the box only in UNIX (definitely not cheap) environments. Hence, accessing a remote graphical application was pretty darn cool. Times have certainly changed, and most people wouldn't recognize an X Terminal if they saw one; however, these crude solutions (by today's standards) were the forerunners of what, for many, the Internet achieved.
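For those who never touched an X Terminal, the trick was network transparency: the program runs on one machine but draws its windows on whatever X server the DISPLAY variable points at. Here is a minimal sketch of the idea in Python (the host name is hypothetical, and the target X server must permit the connection, e.g. via xhost or xauth):

# Run a program locally, but tell it to draw on an X server elsewhere
# on the network – the essence of X Window network transparency.
import os
import subprocess

# "xterminal:0" is a hypothetical address: display 0 on a remote machine.
env = dict(os.environ, DISPLAY="xterminal:0")
subprocess.run(["xclock"], env=env, check=True)  # xclock's window appears on that display

Swap "xterminal:0" for ":0" and the same program draws locally, which is exactly the irony noted below: the mechanism hasn't changed, only the distance the pixels travel.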

With UNIX in a matured, if not slightly retrenching, state and the market for X Terminals about as robust as that for IBM Selectric typeballs, one could wonder why any further investment in the technology is warranted. The short answer is Linux. The change in how the X Window code is sourced and compiled also reflects ten-plus years of advancement in code development technology, and is more in line with what techno geeks want to play with anyway. Yes, the X devoted could probably benefit from enhancements, but the real growth opportunity here is for Linux, and for Linux geeks, who more than likely are running Linux on a PC and therefore still would not know what an X Terminal is. The irony is that X was developed to liberate applications from a specific display device and make them available across the world of burgeoning 10 Mbps Ethernet. Yet with the deployment demographics of Linux, it would seem that in the future an ever-increasing number of these applications will be displayed on the same device all of the time, namely the PC monitor. The mean distance traveled by an X-enabled application may soon be measured in millimeters as opposed to meters or kilometers. How ironic.

Nevertheless, as fun as waxing poetic about the past is, today is a new time, one in which the basic fabric of computing is light years ahead of where it was ten years ago, and the need for specific technologies has changed a great deal. Yet the notions behind X Window (remote access to applications, and a community-source development basis) proved to be harbingers of how we would conduct computing, and its development, across the board in the not too distant future. So, here's to you, X: happy rebirth. Now I need to move another cable here in my home network… by the way, anyone seen my Doritos?