Have you ever owned a computer that made you want to pull your hair out? Wondering if your computer would make the top-10 list of worst computers of all time? You might be in luck. Chassis Plans, a manufacturer of rugged computers, has created this interesting infographic outlining some of the worst computers of all time. From the Commodore VIC-20 to the netbook, this visual takes you through some of the most loathed computers and the features that drove their owners mad. Name a computer problem and one of these machines probably had it. From slow processor speeds to computers that would turn on in the middle of the night to computers that would melt discs, the problems go on and on. Surprisingly, some of these computers, despite their problems, set records like “the first commercial computer to be used in space” or “the first personal computer to sell more than one million units.”
Posted in Computer Memory, High Tech History, IBM, Internet, tagged Apple, Apple computer, Commodore, computer, computer history, Gateway, Netbook, Sharp, technology on April 8, 2013
Siva Vaidhyanathan speaking at Harvard Law School’s Berkman Center for Internet & Society, February 25, 2011
Posted in High Tech History, Internet, tagged AltaVista, Eric Schmidt, Google, Googlization, Libertarianism, Milton Friedman, Neo-Liberalism, Search Engine Optimization, Siva Vaidhyanathan, Tim Berners-Lee, World Wide Web, Yahoo!, Yochai Benkler on March 1, 2011
In the annals of high tech, Google hasn’t been around for very long (it was founded in 1998); but as the successor to such World Wide Web search engines as Yahoo! and AltaVista, Google has in these few short years established itself as the pre-eminent organizer and purveyor of the web’s information.
According to Siva Vaidhyanathan, author of The Googlization of Everything (and why we should worry) and a cultural historian and media scholar at the University of Virginia, Google’s domination of the web’s infrastructure brings numerous benefits along with many potential negatives. As Siva notes, there is a certain “audacity of Google”: it provides ease and pleasure of use; it is free (we don’t have to write checks to it, unlike, say, Comcast); and it appeared to “solve the problem of the web,” making the web infinitely more manageable and removing its “chaos” factor.
An over-arching symbol of Google’s might in The Googlization of Everything is Julius Caesar. Google is compared repeatedly to this Roman ruler who in many ways brought order to chaos in ancient Rome. In Siva’s words, “Chaos on the web demanded governance; it was said to be ungovernable, but we know better. Google (Caesar) came into a vacuum of chaos and declared ‘I will rule benevolently.’”
Siva suggested that he used the word “worry” in his book’s title and not “panic” because when one worries, he or she is capable of thinking; whereas with panic, irrationality is typical. He noted first that in undertaking the book, he found it difficult because of the company’s constantly evolving technology; that is, almost weekly, Google was adding a new attraction (or distraction) to its growing menu of services. Speaking to this point, Siva quoted Harvard Law School professor Yochai Benkler, author of The Wealth of Networks:
Google could become so powerful on the desktop, in the email utility, and on the Web that it will effectively become a super node that will indeed raise the prospect of a re-emergence of a mass-media model.
Google, for its part, says that its mission is “To organize the world’s information and make it universally accessible.” This causes Siva to worry because it appears so all-encompassing and grandiose. I myself would term this phenomenon a kind of “secular divinity.” The feeling that Google manipulates the world’s information, as opposed to merely the web’s, is a “game changer.” Having at one’s fingertips a pipeline to the world’s information makes Google seem omniscient, omnipotent and all-benevolent all at once, sort of like the “man behind the curtain” in the movie The Wizard of Oz. This, as Siva observes, has resulted in an unhealthy “blind faith” in Google’s ability to solve almost any problem. The public has lovingly embraced the company with deep trust and a suspension of disbelief about its abilities – in a technological sense, we’re being cradled in the arms of Morpheus.
A question of regulation
Eric Schmidt, the company’s Chief Executive Officer, when asked if Google should be regulated, offered a predictable denial by saying that the wrong question was being asked, and that Google was “regulated” in a number of ways – including multiple levels of responsibility. He asserted that Google is run on a set of values and principles upon which the company was founded. Siva noted that this was not a case of “Ayn Rand versus Joseph Stalin”; Google presents a more complex conundrum than just one political extreme or another’s approach toward regulation and responsibility.
According to Siva, Google acts within three different models of content processing: 1) rank and link; 2) host and delivery (e.g. YouTube); and 3) data capture/publishing/content creation (e.g. Google Earth, Google Books and Street View). The integration of these three types of content processing gives Google a roadmap to the whims, desires, interests, and yes, consumer habits of its users, which it uses to sell advertising. As Siva asserts, we are not so much Google’s customers as its product: the company takes our information and delivers advertisements that are very specifically targeted to our individual tastes. Google’s algorithm – its method of ranking search results – has made this a reality.
SEO Arms Race
Search Engine Optimization, or SEO, has become a battleground where largely commercial websites employ questionable tactics to achieve higher rankings in Google searches. Sites like JC Penney and Overstock.com have been specifically cited for planting links (such as .edu hyperlinks) designed to make Google deem them, in its parlance, “high-quality” sites. Siva also cites the Huffington Post as a site that has mastered SEO techniques: it “repurposes” original material from other websites in such a way as to gain priority in searches.
Google is constantly innovating and evolving. It concentrates on speed (the company says even a tenth of a second matters to consumers) and has begun to take on Bing.com as the conduit to shopping. Siva declared that Bing has consistently been the search engine for shoppers, but that Google has made significant inroads. As a result, information and learning have both been subordinated to consumption. In this manner, according to Siva, consumer satisfaction has been used to short-circuit political involvement and awareness. Google has combined this with an overt appeal to “corporate social responsibility” – an essential component of both libertarianism and neo-liberalism, which hold that market forces and consumer choice are instrumental to the exercise of social responsibility. Siva quotes the late economist Milton Friedman, who said, “The social responsibility of business is to increase its profits.”
In the lightning-quick evolution of the World Wide Web, stemming from its origins with Tim Berners-Lee at CERN, it’s important to recognize that Google’s influence as a start-up was vastly different from what it is today as a global institution – and the functions that comprise it today will likely evolve considerably over the next ten years. Given the rate at which Google has penetrated both the consciousness and the information consumption habits of the world’s computer users, there is always room for healthy concern. Siva, though predominantly an optimist who acknowledges that Google has positively revolutionized the way we access information, also believes we should temper that view by looking at the company more closely and realistically than our rose-colored glasses might ordinarily allow.
Here’s a great clip from a January, 1994 episode of the Today Show, where co-hosts Bryant Gumbel, Katie Couric and Elizabeth Vargas appear completely flummoxed about just what the Internet is. Shows you just how far and how quickly we’ve come. Hey, maybe they should have asked High Tech History – after all, we did an entire post on the history of the “@” symbol!
In English it’s simply the “at” sign; the Dutch call it an “ape’s tail,” the Spanish an “arroba,” and the Italians a “chiocciola.” The Germans call it a “monkey’s tail,” and the Chinese a “little mouse.” The Russians think of it as a dog, and the Finns as a slumbering cat. The “@” symbol, a ubiquitous presence in electronic or “e” mail, is easy to overlook in the course of our daily Internet correspondence; but although there are disagreements about its precise origin, its central role in modern communication had utilitarian, if not somewhat random, beginnings. We do know it first appeared on a typewriter – an American Underwood – in 1885 and was used, mostly in accounting documents, as shorthand for “at the rate of.”
According to Yaelf.com, which is devoted to English language history, “That the @ symbol finally became part of cyberspace is due to Ray Tomlinson, an American engineer who is one of the founding fathers of the Internet, or actually the Arpanet [at Bolt, Beranek and Newman], the predecessor to the present Internet. [In 1971] Tomlinson invented a system for individual electronic mail, introducing the first ‘hot’ application of the Arpanet. He used the @ symbol to distinguish a sender’s or addressee’s name from the name of the electronic mail box.” According to Giorgio Stabile, a professor of history at Rome’s La Sapienza university, Tomlinson chose this symbol “just because it was on the keyboard.”
Noticing that the symbol sat obscurely on the keyboard for all of those intervening years, Tomlinson wanted something that would indicate that the user was “at” an actual computer writing out a message. Another theory, apocryphal perhaps, has suggested Tomlinson selected the “@” in less than a minute of consideration.
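Tomlinson’s convention survives unchanged today: everything before the @ names the user’s mailbox, and everything after names the host machine. A minimal sketch in Python of splitting an address at the @ (the function name and the sample address are my own illustrations, not from any source):

```python
def split_address(address: str) -> tuple[str, str]:
    """Split an email address into (user, host) at the last '@',
    mirroring Tomlinson's user@host convention."""
    user, sep, host = address.rpartition("@")
    if not sep or not user or not host:
        raise ValueError(f"not a valid user@host address: {address!r}")
    return user, host

# A hypothetical example in the style of early Arpanet addresses:
print(split_address("tomlinson@bbn-tenexa"))  # → ('tomlinson', 'bbn-tenexa')
```

Using `rpartition` (splitting at the *last* @) matters because the local part of an address may itself legally contain an @ when quoted, while the host part never does.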
Now that the Internet and email have become commercial, if not cultural, touchstones in our society, the “@” symbol has inexorably captured the attention of design scholars and enthusiasts. The New York Times this week ran a story about the symbol’s inclusion in the Museum of Modern Art’s Architecture and Design Collection in New York. According to the Times, there are at least two reasons for this:
First, both the old and new @ fulfill the same function of simplifying and clarifying something that’s fiendishly complicated to make and interpret: handwritten script and computer code respectively. Paola Antonelli, senior curator of architecture and design at MoMA, describes that as “an act of design of extraordinary elegance and economy.” Both qualities are prized by MoMA, especially “economy” in a time of recession and environmental crisis, when reinventing something that’s under-used seems much smarter than designing something new.
Timeliness matters to MoMA too, and the new @ is timely not only in its economy but also precisely because it is not physical (just like equally dynamic areas of contemporary design such as software and social design). “MoMA’s collection has always been in touch with its time,” Ms. Antonelli said, “and design these days is often an act with aesthetic and ethical consequences, not necessarily a physical object.”
In conclusion, the hastily considered selection of the “@” symbol, which at the time represented a practical response to a communications need, has, with its elevation to almost mythological status, influenced not only our daily lives but our aesthetic conscience. It excels not only in form and function but also, according to MoMA, embodies the values of clarity, honesty and simplicity that the museum considers essential to good design. As a personal note, in the early 1990s I purchased a Swatch watch featuring a big red “@” on a white field. Timely, indeed, in a very literal sense.
On December 16, 2003, President George W. Bush signed the CAN-SPAM Act of 2003 into law. CAN-SPAM derives from the bill’s full name: Controlling the Assault of Non-Solicited Pornography And Marketing Act of 2003. The new law established the first national standards for sending commercial e-mail. While it doesn’t allow e-mail recipients to sue spammers or file class-action lawsuits, it does allow enforcement by the Federal Trade Commission (FTC), state attorneys general, Internet service providers (ISPs), and other federal agencies for special categories of spammers (such as banks).
Senator John McCain was responsible for a last-minute amendment that makes businesses promoted in spam subject to FTC penalties if they knew, or should have known, that they were being promoted by the use of spam. This was designed to close a loophole that allowed spammers to abuse affiliate programs. It also encourages businesses to assist the FTC in identifying spammers.
Today, AOL stock begins trading on the NYSE and joins the S&P MidCap 400 Index as the company spins off from Time Warner. Back in 2001, when the merger of Time Warner and AOL took place, AOL was valued at as much as $165 billion. Today, AOL is valued at around $2.8 billion.
AOL was one of the first Internet stocks added to the S&P 500 Index in the 1990s, which gave a lot of credibility to the hot new Internet stocks that were displacing more established firms on the exchange. Today, AOL doesn’t qualify for the S&P 500 Index because its value is below $3 billion. If its value crosses that line, it can move up from the S&P MidCap 400 Index and back to the S&P 500 Index.
Whose Place Did AOL Take on the New York Stock Exchange?
AOL replaces Imation Corp., whose market capitalization dropped below the $750 million minimum requirement to remain on the NYSE. Imation is a spin-off of 3M that designs, manufactures, and markets a wide range of recordable data storage media and consumer electronics products.
Will The New AOL Make It?
We’ll find out how AOL makes it on its own. Today, AOL has a new look, new logo, and new ad campaign. Its CEO, Tim Armstrong, joined AOL last March from Google and has plans for AOL’s growth that include delivering premium content such as news and local information, communications tools like instant messaging, and online advertising. AOL is way beyond “You’ve Got Mail,” but we’ll see if it has enough mojo to propel it back to the S&P 500. Let’s just hope that history doesn’t repeat itself.
The New York Times reported yesterday (Dec. 7th) that there was a reunion last month of colleagues who pioneered the Stanford Artificial Intelligence Laboratory. They met over two days at the William Gates Computer Center on the Stanford campus.
According to the article’s author, John Markoff, there were other pioneering labs at Stanford, but the A.I. lab received less recognition than its peers:
“One laboratory, Douglas Engelbart’s Augmentation Research Center, became known for the mouse; a second, Xerox’s Palo Alto Research Center, developed the Alto, the first modern personal computer. But the third, the Stanford Artificial Intelligence Laboratory, or SAIL, run by the computer scientist John McCarthy, gained less recognition.”
SAIL was begun by Dr. John McCarthy (who coined the term “artificial intelligence”) in 1963, with Les Earnest as its deputy director. McCarthy’s initial proposal to the Advanced Research Projects Agency of the Pentagon envisioned that building a thinking machine would take about a decade. In 1966, the laboratory took up residence in the foothills of the Santa Cruz Mountains behind Stanford, in an unfinished corporate research facility that had been intended for a telecommunications firm.
Markoff continues, “SAIL researchers embarked on an extraordinarily rich set of technical and scientific challenges that are still on the frontiers of computer science, including machine vision and robotic manipulation, as well as language and navigation.”
This group of alumni distinguished themselves in other innovative ways, with artificial intelligence at the heart of their experimentation. As Markoff notes, “… Raj Reddy and Hans Moravec went on to pioneer speech recognition and robotics at Carnegie Mellon University. Alan Kay brought his Dynabook portable computer concept first to Xerox PARC and later to Apple. Larry Tesler developed the philosophy of simplicity in computer interfaces that would come to define the look and functioning of the screens of modern Apple computers — what is called the graphical user interface, or G.U.I.”
“John Chowning, a musicologist, referred to SAIL as a ‘Socratean abode.’ He was invited to use the mainframe computer at the laboratory late at night when the demand was light, and his group went on to pioneer FM synthesis, a technique for creating sounds that transforms the quality, or timbre, of a simple waveform into a more complex sound. (The technique was discovered by Dr. Chowning at Stanford in 1973 and later licensed to Yamaha.)”
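Chowning’s FM technique can be sketched in a few lines: a modulator sine wave bends the phase of a carrier sine wave, and the modulation index controls how much the timbre is enriched. A minimal sketch in Python of the basic FM equation y(t) = sin(2πf_c·t + I·sin(2πf_m·t)); the function name, frequencies, and sample rate here are my own illustrative choices, not details from the source:

```python
import math

def fm_tone(f_carrier, f_mod, index, sample_rate=8000, duration=0.01):
    """Generate samples of y(t) = sin(2*pi*f_c*t + I*sin(2*pi*f_m*t)),
    the basic FM-synthesis equation: the modulator varies the carrier's
    phase, enriching its timbre as the modulation index I grows."""
    n = int(sample_rate * duration)
    return [
        math.sin(2 * math.pi * f_carrier * t
                 + index * math.sin(2 * math.pi * f_mod * t))
        for t in (i / sample_rate for i in range(n))
    ]

# With index = 0 the result is a plain sine wave; raising the index
# spreads energy into sidebands and the tone grows more complex.
samples = fm_tone(f_carrier=440, f_mod=110, index=2.0)
```

The appeal of the technique, and what made it worth licensing to Yamaha, is that this one cheap operation replaces banks of oscillators: complex, evolving timbres fall out of a single phase-modulated sine.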
As has been noted previously in “High Tech History,” Spacewar was, in essence, the first video game, programmed on a Digital Equipment Corp. PDP-1 computer. At Stanford, Joel Pitts, a protégé of SAIL’s Don Knuth (who wrote definitive texts on computer programming), “… took a version of the Spacewar computer game and turned it into the first coin-operated video game — which was installed in the university’s student coffee house — months before Nolan Bushnell did the same with Atari.”
In 1980, the lab was merged into Stanford’s computer science department; it reopened in 2004 and is now enjoying something of a rebirth. Markoff concludes,
“The reunion also gave a hint of what is to come. During an afternoon symposium at the reunion, several of the current SAIL researchers showed a startling video called ‘Chaos,’ taken from the Stanford Autonomous Helicopter project. An exercise in machine learning, the video shows a model helicopter making a remarkable series of maneuvers that would not be possible by a human pilot. The demonstration is particularly striking because the pilot system first learned from a human pilot and then was able to extend those skills.”
But an artificial intelligence? It is still an open question. In 1978, Dr. McCarthy wrote, “human-level A.I. might require 1.7 Einsteins, 2 Maxwells, 5 Faradays and .3 Manhattan Projects.”
On November 4th, the Mass Technology Leadership Council honored Leo Beranek, founder of Bolt, Beranek and Newman (BBN), with a Lifetime Achievement Award. Beranek, 95, was a pioneer in the field of acoustics as an MIT professor and in Boston television as a co-owner of WCVB-TV, Channel 5. Many know him best as founder of the company that bears his name.
History of BBN
BBN was founded in 1948 by Leo Beranek and Richard Bolt, professors at MIT, along with Bolt’s former student Robert Newman. (The acoustical consulting business was later spun off into Acentech Incorporated.) The calculations involved in the acoustical work required computers, and BBN knew that if it was ever going to become a large company, it couldn’t just focus on acoustics. So BBN bought a number of computers in the late 1950s and early 1960s, notably the first production PDP-1 from Digital Equipment Corporation, which it beta-tested.
Some of BBN’s notable developments in the field of computer networks are the ARPANET, the forerunner of today’s Internet; the first e-mail sent; and the use of the @ sign in an e-mail address. BBN also developed the first Internet protocol router, an early predecessor of voice over IP, and an early time-sharing system. And the list of BBN developments goes on and on, as does the list of well-known computer icons who have worked at BBN.
Leo Beranek Reflects
Mass High Tech News just published a great interview with Leo Beranek, 95, that is worth a read. It covers three themes woven throughout his history: innovation, entrepreneurship, and his unique ability to connect with people. He begins by talking about his humble beginnings in Iowa, where he first became interested in acoustics through his drum set and by tinkering with a Crosley one-tube radio. He talks about starting BBN in 1948 and hiring first-class people. At the time, there was no venture capital, so they started the company with a bank loan.
Realizing that he’d never grow the company to a large size if he just focused on acoustics, he hired J. C. R. Licklider, who convinced BBN to spend $30,000 on a computer even though they didn’t quite know what they were going to do with it. This seemed crazy at the time because the company had never bought anything more expensive than a drinking fountain, but they made the investment. With Licklider learning digital programming, one thing led to another, and BBN ended up with the DEC PDP-1, which Beranek said “looked like two or three refrigerators standing side by side.”
For more information on Beranek’s thoughts on what the Internet has become today, running a local TV station, some of the great acoustical halls, and maintaining a work/life balance, read his autobiography, “Riding the Waves: A Life in Sound, Science, and Industry.”
– Carole Gunst