Tuesday, April 15, 2008

The hemolytic anemias

The hemolytic anemias are characterized by excessive blood destruction, which results in such symptoms as jaundice, formation of gallstones, and increased amounts of urobilinogen and urobilin in the stools and urine. Phagocytosis of red blood cells undergoing destruction, with deposition of hemosiderin, results in splenomegaly. The acute forms may result from bloodstream infections or various types of hemolytic poisons. The chronic forms are often congenital disorders and include familial hemolytic icterus, Mediterranean anemia and sickle cell anemia.

Acute Hemolytic Anemia. Acute hemolytic anemia may result from mismatched transfusions; from hemolytic toxins such as snake venoms, bacterial toxins, phenylhydrazine or phenol; and from malaria or bartonellosis. The symptoms include chills and rigor, vomiting and diarrhea, pain in the back and legs, hemoglobinuria, albuminuria and anuria. The red blood cells show marked variation in size and staining reaction. There is striking evidence of regeneration, including reticulocytosis and circulating normoblasts. Treatment consists of removal of the cause.

Sunday, April 13, 2008

Using the Internet to your Advantage

Coming from the "Internet generation," I understand the importance of doing business, or at least advertising your business, online. The Internet has opened up a wealth of possibilities for businesses that simply were not available 10 or 15 years ago. But before I start giving away my age, please let me explain further.
Open 24 Hours a Day, 7 Days a Week
The Internet never closes down at the end of the business day or for the weekend. Your business's information and contact details are available at any time. This means that potential clients can research and decide whether they want to do business with you without having to contact you. Before, the only time a potential client could inquire about your business was during business hours, unless you wanted to give out your private and cell phone numbers. Now, they can send you an email, and you can answer it at your convenience.
More Information Than Ever Before
The Internet does not constrain you to a certain number of words. You have virtually unlimited space to describe, advertise, and display information about your business. Before, you had to fit what you felt was the most important information into a 30-second commercial or a specific size of brochure.
Global Users
Before the Internet, you had to spend large amounts of money to advertise anywhere beyond your local area. You had to rely on people driving by or hearing or seeing one of your local ads in order to do business with you. The Internet, however, is reachable from every country on the planet, and a website or online ad usually costs exactly the same whether it is viewed by people from your hometown or people from the other side of the world.
Changeable Content
When creating a business brochure or handout, you had to make sure you included only information that would not go out of date, because once brochures are printed, you cannot change them. This created potentially large advertising costs, and sometimes many wasted ads, due to changing information. Likewise, you could not change a radio or TV ad until its campaign had run its course. Online, changing information is as simple as point and click. Your phone number changed? Simple: log onto your website or ad and change it.
Small Cost, Large Results
We all know how expensive traditional advertising campaigns can be. A simple ad in the newspaper now costs about $15 a week, a billboard around $600 a month. Advertising online is far less expensive. For example, a static, 5-page website will cost you around $109 (if you choose the right designer) and about $3 a month after that: a total of $145 a year. Submitting your site to search engines, where clients can find your business, is free, and it allows visitors to search for and find your website. If you choose to advertise your website on other networks, you only pay when someone actually clicks on your ad. That lets you control how much you spend, and you pay only for the people who actually see your website. A small business, or a business with a small budget, can get and maintain a website for about $0.40 a day.
Overall, the Internet is one of the best ways to successfully advertise your business. If you are new to the Internet, you may want to consider hiring a professional to develop and maintain your advertisements for you. When searching for a designer and web developer, pay special attention to the details. Here are a few things you will want to avoid:
1) Template Websites - Templates, sometimes called cookie-cutter websites, are pre-made designs and layouts. Any number of other websites can use the very same template. If a potential customer has visited a website with a template similar to yours, chances are they will either not take your business as seriously or associate your business with that other website. Ask the potential designer whether your website will be unique or made from a template before you do business with them.
2) Hourly Rate Design - Some web design companies offer "hourly rate web design." Many hourly designers will tell you something takes much longer to complete than it actually does. In that case, you will end up paying for hours of work that actually took only minutes, or they will hold your website hostage until you pay. Look for set packages that list what you will get, no matter how much time it takes to complete.
3) High Hosting Charges - Many new business owners, and business owners not familiar with the Internet, will let their web designer host their website for them. While there is nothing wrong with this, be cautious about the amount you will be paying each month for hosting. A small business that only requires a small website should not pay over $3-$7 a month for hosting. Larger businesses, or businesses with large or dynamic websites, should never pay over $10-$15 a month. On the other hand, you may find a design company offering free hosting as an introductory offer. If so, be sure to ask how much you will be paying for hosting once the offer expires.
4) Long Completion Times - Web design is a complicated process; however, a good designer can finish a website in a fairly short amount of time. Be cautious of web designers who take a long time to complete each website. This may be a sign that they have too much on their plate. If that is the case, chances are you will not receive the attention and care needed to make a website a success. Always ask a potential designer how long it typically takes them to finish and publish a website.
By: Rebecca Clary

Software Outsourcing - by Dennis Ritchie

Software outsourcing is a job contract awarded by a contracting party in one country to a company in another country that has a team of human resources with a definite skill set. On completion of the proposed job, the software is shipped to the client's organization. The need to rapidly complete large development projects has caused many organizations in recent years to consider outsourcing work rather than performing development in-house. A major reason for doing this is the reduction in costs that can be achieved compared to those incurred if the work is executed in their own IT department. Development time can also be shortened, especially if the company does not already have the skills required to undertake the project. Thus offshore software development has seen tremendous growth in the last few years. Offshore outsourcing provides the ability to retain skilled overseas staff at a fraction of the usual human resource cost, which appeals to many entrepreneurs.

The key to the success of an offshore software project is the smooth flow of communication between the offshore vendor and the onshore client. Continuous constructive dialogue between the two sides is not limited to verbal communication, but is aggressively pursued in writing, meetings and conferences. A local presence of the offshore vendor can therefore be a huge advantage for the client, though it might cost the client more.

The client should be very careful about the project budget. A proper budget forecast includes vendor rates, the risks involved, the scope for change in the project specification, and the vendor resource matrix. The financial terms should also be decided and agreed upon beforehand. Both parties should maintain adequate transparency in order to achieve successful completion of the work.

Further, offshore development requires a methodology quite different from local development. For example, an onsite development team can resolve critical issues by meeting in a conference room. When teams are dispersed, you have to create a process that automatically keeps everyone in the loop.

Many companies are therefore gradually setting up their own offshore development centers, though this is not always a feasible solution for small or medium enterprises. They can instead consider taking a stake (in the form of an equity partnership, acquisition or merger) in an existing offshore company with an impressive employee retention rate and a stellar track record, ensuring quality management and a loyal workforce.

Provided you adhere to process and ensure accurate and prompt completion of projects, there are significant savings from moving work offshore. The hardest part to quantify is the ability to get something done quickly and get it to market ahead of your competition. For those with the time to learn the culture and establish relationships, the need to have a full-fledged offshore team at their bidding, and the money to invest, starting their own offshore center makes sense.

Tuesday, April 8, 2008

MARKETING MANAGEMENT




Definition and scope
There is no universally accepted definition of the term. In part, this is due to the fact that the role of a marketing manager can vary significantly based on a business' size, corporate culture, and industry context. For example, in a large consumer products company, the marketing manager may act as the overall general manager of his or her assigned product category or brand with full profit & loss responsibility. In contrast, a small law firm may have no marketing personnel at all, requiring the firm's partners to make marketing management decisions on a largely ad-hoc basis.
In the widely used text Marketing Management (2006), Philip Kotler and Kevin Lane Keller define marketing management as "the art and science of choosing target markets and getting, keeping and growing customers through creating, delivering, and communicating superior customer value."
From this perspective, the scope of marketing management is quite broad. The implication of such a definition is that any activity or resource the firm uses to acquire customers and manage the company's relationships with them is within the purview of marketing management. Additionally, the Kotler and Keller definition encompasses both the development of new products and services and their delivery to customers.
Noted marketing expert Regis McKenna expressed a similar viewpoint in his influential 1991 Harvard Business Review article "Marketing is Everything." McKenna argued that because marketing management encompasses all factors that influence a company's ability to deliver value to customers, it must be "all-pervasive, part of everyone's job description, from the receptionists to the Board of Directors."
This view is also consistent with the perspective of management guru Peter Drucker, who wrote: "Because the purpose of business is to create a customer, the business enterprise has two--and only these two--basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs. Marketing is the distinguishing, unique function of the business."
But because many businesses operate with a much more limited definition of marketing, such statements can appear controversial, or even ludicrous to some business executives. This is especially true in those companies where the marketing department is responsible for little more than developing sales brochures and executing advertising campaigns.
The broader, more sophisticated definitions of marketing management from Drucker, Kotler and other scholars are therefore juxtaposed against the narrower operating reality of many businesses. The source of confusion here is often that inside any given firm, the term marketing management may be interpreted to mean whatever the marketing department happens to do, rather than a term that encompasses all marketing activities -- even those marketing activities that are actually performed by other departments, such as the sales, finance, or operations departments. If, for example, the finance department of a given company makes pricing decisions (for deals, proposals, contracts, etc.), that finance department has responsibility for an important component of marketing management -- pricing.
Activities and functions
Marketing management therefore encompasses a wide variety of functions and activities, although the marketing department itself may be responsible for only a subset of these. Regardless of the organizational unit of the firm responsible for managing them, marketing management functions and activities include the following:
Marketing research and analysis

In order to make fact-based decisions regarding marketing strategy and design effective, cost-efficient implementation programs, firms must possess a detailed, objective understanding of their own business and the market in which they operate. In analyzing these issues, the discipline of marketing management often overlaps with the related discipline of strategic planning.
Traditionally, marketing analysis was structured into three areas: Customer analysis, Company analysis, and Competitor analysis (so-called "3Cs" analysis). More recently, it has become fashionable in some marketing circles to divide these further into five "Cs": Customer analysis, Company analysis, Collaborator analysis, Competitor analysis, and analysis of the industry Context.
The focus of customer analysis is to develop a scheme for market segmentation, breaking down the market into various constituent groups of customers, which are called customer segments or market segments. Marketing managers work to develop detailed profiles of each segment, focusing on any number of variables that may differ among the segments: demographic, psychographic, geographic, behavioral, needs-benefit, and other factors may all be examined. Marketers also attempt to track these segments' perceptions of the various products in the market using tools such as perceptual mapping.
In company analysis, marketers focus on understanding the company's cost structure and cost position relative to competitors, as well as working to identify a firm's core competencies and other competitively distinct company resources. Marketing managers may also work with the accounting department to analyze the profits the firm is generating from various product lines and customer accounts. The company may also conduct periodic brand audits to assess the strength of its brands and sources of brand equity.
The firm's collaborators may also be profiled, which may include various suppliers, distributors and other channel partners, joint venture partners, and others. An analysis of complementary products may also be performed if such products exist.
Marketing management employs various tools from economics and competitive strategy to analyze the industry context in which the firm operates. These include Porter's five forces, analysis of strategic groups of competitors, value chain analysis, and others. Depending on the industry, the regulatory context may also be important to examine in detail.
In Competitor analysis, marketers build detailed profiles of each competitor in the market, focusing especially on their relative competitive strengths and weaknesses using SWOT analysis. Marketing managers will examine each competitor's cost structure, sources of profits, resources and competencies, competitive positioning and product differentiation, degree of vertical integration, historical responses to industry developments, and other factors.
Marketing management often finds it necessary to invest in research to collect the data required to perform accurate marketing analysis. As such, firms often conduct market research (alternatively, marketing research) to obtain this information. Marketers employ a variety of techniques to conduct market research, but some of the more common include:
Qualitative marketing research, such as focus groups
Quantitative marketing research, such as statistical surveys
Experimental techniques, such as test markets
Observational techniques, such as ethnographic (on-site) observation

Marketing managers may also design and oversee various environmental scanning and competitive intelligence processes to help identify trends and inform the company's marketing analysis.
Marketing strategy

Once the company has obtained an adequate understanding of the customer base and its own competitive position in the industry, marketing managers are able to make key strategic decisions and develop a marketing strategy designed to maximize the revenues and profits of the firm. The selected strategy may aim for any of a variety of specific objectives, including optimizing short-term unit margins, revenue growth, market share, long-term profitability, or other goals.
To achieve the desired objectives, marketers typically identify one or more target customer segments which they intend to pursue. Customer segments are often selected as targets because they score highly on two dimensions:

1) The segment is attractive to serve because it is large, growing, makes frequent purchases, or is not price sensitive (i.e., is willing to pay high prices), among other factors; and

2) The company has the resources and capabilities to compete for the segment's business, can meet the segment's needs better than the competition, and can do so profitably. In fact, a commonly cited definition of marketing is simply "meeting needs profitably."
The implication of selecting target segments is that the business will subsequently allocate more resources to acquire and retain customers in the target segment(s) than it will for other, non-targeted customers. In some cases, the firm may go so far as to turn away customers that are not in its target segment. The doorman at a swanky nightclub, for example, may deny entry to unfashionably dressed individuals because the business has made a strategic decision to target the "high fashion" segment of nightclub patrons.
In conjunction with targeting decisions, marketing managers will identify the desired positioning they want the company, product, or brand to occupy in the target customer's mind. This positioning is often an encapsulation of a key benefit the company's product or service offers that is differentiated and superior to the benefits offered by competitive products. For example, Volvo has traditionally positioned its products in the automobile market in North America in order to be perceived as the leader in "safety", whereas BMW has traditionally positioned its brand to be perceived as the leader in "performance."
Ideally, a firm's positioning can be maintained over a long period of time because the company possesses, or can develop, some form of sustainable competitive advantage. The positioning should also be sufficiently relevant to the target segment such that it will drive the purchasing behavior of target customers.

Marketing plan

After the firm's strategic objectives have been identified, the target market selected, and the desired positioning for the company, product or brand has been determined, marketing managers focus on how to best implement the chosen strategy. Traditionally, this has involved implementation planning across the "4Ps" of marketing: Product management, Pricing, Place (i.e. sales and distribution channels), and Promotion.
Taken together, the company's implementation choices across the 4Ps are often described as the marketing mix, meaning the mix of elements the business will employ to "go to market" and execute the marketing strategy. The overall goal for the marketing mix is to consistently deliver a compelling value proposition that reinforces the firm's chosen positioning, builds customer loyalty and brand equity among target customers, and achieves the firm's marketing and financial objectives.
In many cases, marketing management will develop a marketing plan to specify how the company will execute the chosen strategy and achieve the business' objectives. The content of marketing plans varies from firm to firm, but commonly includes:
An executive summary
Situation analysis to summarize facts and insights gained from market research and marketing analysis
The company's mission statement or long-term strategic vision
A statement of the company's key objectives, often subdivided into marketing objectives and financial objectives
The marketing strategy the business has chosen, specifying the target segments to be pursued and the competitive positioning to be achieved
Implementation choices for each element of the marketing mix (the 4Ps)
A summary of required investments (in people, programs, IT systems, etc.)
Financial analysis, projections and forecasted results
A timeline or high-level project plan
Metrics, measurements and control processes
A list of key risks and strategies for managing these risks
Project, process, and vendor management
Once the key implementation initiatives have been identified, marketing managers work to oversee the execution of the marketing plan. Marketing executives may therefore manage any number of specific projects, such as sales force management initiatives, product development efforts, channel marketing programs and the execution of public relations and advertising campaigns. Marketers use a variety of project management techniques to ensure projects achieve their objectives while keeping to established schedules and budgets.
More broadly, marketing managers work to design and improve the effectiveness of core marketing processes, such as new product development, brand management, marketing communications, and pricing. Marketers may employ the tools of business process reengineering to ensure these processes are properly designed, and use a variety of process management techniques to keep them operating smoothly.
Effective execution may require management of both internal resources and a variety of external vendors and service providers, such as the firm's advertising agency. Marketers may therefore coordinate with the company's Purchasing department on the procurement of these services.
Organizational management and leadership
Marketing management usually requires leadership of a department or group of professionals engaged in marketing activities. Often, this oversight will extend beyond the company's marketing department itself, requiring the marketing manager to provide cross-functional leadership for various marketing activities. This may require extensive interaction with the human resources department on issues such as recruiting, training, leadership development, performance appraisals, compensation, and other topics.
Marketing management may spend a fair amount of time building or maintaining a marketing orientation for the business. Achieving a market orientation, also known as "customer focus" or the "marketing concept", requires building consensus at the senior management level and then driving customer focus down into the organization. Cultural barriers may exist in a given business unit or functional area that the marketing manager must address in order to achieve this goal. Additionally, marketing executives often act as a "brand champion" and work to enforce corporate identity standards across the enterprise.
In larger organizations, especially those with multiple business units, top marketing managers may need to coordinate across several marketing departments and also resources from finance, R&D, engineering, operations, manufacturing, or other functional areas to implement the marketing plan. In order to effectively manage these resources, marketing executives may need to spend much of their time focused on political issues and inter-departmental negotiations.
The effectiveness of a marketing manager may therefore depend as much on his or her ability to make the internal "sale" of various marketing programs as on the external customer's reaction to those programs.
Reporting, measurement, feedback and control systems
Marketing management employs a variety of metrics to measure progress against objectives. It is the responsibility of marketing managers -- in the marketing department or elsewhere -- to ensure that the execution of marketing programs achieves the desired objectives and does so in a cost-efficient manner.
Marketing management therefore often makes use of various organizational control systems, such as sales forecasts, sales force and reseller incentive programs, sales force management systems, and customer relationship management tools (CRM). Recently, some software vendors have begun using the term "marketing operations management" or "marketing resource management" to describe systems that facilitate an integrated approach for controlling marketing resources. In some cases, these efforts may be linked to various supply chain management systems, such as enterprise resource planning (ERP), material requirements planning (MRP), efficient consumer response (ECR), and inventory management systems.
Measuring the return on investment (ROI) and marketing effectiveness of various marketing initiatives is a significant problem for marketing management. Various market research, accounting and financial tools are used to help estimate the ROI of marketing investments. Brand valuation, for example, attempts to identify the percentage of a company's market value that is generated by the company's brands, and thereby estimate the financial value of specific investments in brand equity. Another technique, integrated marketing communications (IMC), is a CRM database-driven approach that attempts to estimate the value of marketing mix executions based on the changes in customer behavior these executions generate.


Marketing management is a business discipline focused on the practical application of marketing techniques and the management of a firm's marketing resources and activities. Marketing managers are often responsible for influencing the level, timing, and composition of customer demand in a manner that will achieve the company's objectives.

Nanotechnology


Overview
In 1965, Gordon Moore, one of the founders of Intel Corporation, made the astounding prediction that the number of transistors that could be fitted in a given area would double every year for the next ten years. It did, and the phenomenon became known as Moore's Law. The trend has continued far past the predicted ten years up to the present day, going from just over 2,000 transistors in the original 4004 processor of 1971 to over 40,000,000 transistors in the Pentium 4. There has, of course, been a corresponding decrease in the size of individual electronic elements, going from millimeters in the 60's to hundreds of nanometers in modern circuitry.
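As a quick sanity check on those figures, here is a back-of-the-envelope Python sketch (the transistor counts and dates are the approximate ones quoted above):

    import math

    # Implied doubling period between the two transistor counts quoted
    # above: ~2,300 (Intel 4004, 1971) and ~42,000,000 (Pentium 4, 2000).
    t0, t1 = 2_300, 42_000_000
    years = 2000 - 1971
    doublings = math.log2(t1 / t0)        # roughly 14 doublings
    print(f"{12 * years / doublings:.0f} months per doubling")  # ~25

The realized rate works out to roughly two years per doubling over those three decades, close to Moore's own later revision of the law.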
At the same time, the chemistry, biochemistry and molecular genetics communities have been moving in the other direction. Over much the same period, it has become possible to direct the synthesis, either in the test tube or in modified living organisms, of larger and larger and more and more complex molecular structures, up to tens or hundreds of nanometers in size. Enzymes are the molecular devices that drive life and in recent years it has both become possible to manipulate the structures and functions of these systems in vivo and to build complex biomimetic analogues in vitro.
Finally, the last quarter of a century has seen tremendous advances in our ability to control and manipulate light. Solid state lasers are now available for less than the price of a hamburger. We can generate light pulses as short as a few femtoseconds (1 fs = 10⁻¹⁵ s). We can image light with computers. And we can send information almost noiselessly along optical fibers at bandwidths of many gigabits per second. Light too has a size, and this size is also on the hundred-nanometer scale.
Thus now, at the beginning of a new century, three powerful technologies have met on a common scale — the nanoscale — with the promise of revolutionizing both the worlds of electronics and of biology. This new field, which we refer to as biomolecular nanotechnology, holds many possibilities from fundamental research in molecular biology and biophysics to applications in biosensing, biocontrol, bioinformatics, genomics, medicine, computing, information storage and energy conversion.

Historical background

Humans have unwittingly employed nanotechnology for thousands of years, for example in making steel, in paintings and in vulcanizing rubber. Each of these processes relies on the properties of stochastically formed atomic ensembles mere nanometers in size, and is distinguished from chemistry in that it does not rely on the properties of individual molecules. But the development of the body of concepts now subsumed under the term nanotechnology has been slower.
The first mention of some of the distinguishing concepts in nanotechnology (but predating use of that name) was in 1867 by James Clerk Maxwell when he proposed as a thought experiment a tiny entity known as Maxwell's Demon able to handle individual molecules.
The first observations and size measurements of nanoparticles were made during the first decade of the 20th century. They are mostly associated with the name of Zsigmondy, who made detailed studies of gold sols and other nanomaterials with sizes down to 10 nm and less, publishing a book on the subject in 1914. He used an ultramicroscope, which employs the dark-field method, for seeing particles much smaller than the wavelength of light. Zsigmondy was also the first to use the nanometer explicitly for characterizing particle size, defining it as 1/1,000,000 of a millimeter, and he developed the first system of classification based on particle size in the nanometer range.
There were many significant developments during the 20th century in characterizing nanomaterials and related phenomena belonging to the field of interface and colloid science. In the 1920s, Irving Langmuir and Katharine B. Blodgett introduced the concept of a monolayer, a layer of material one molecule thick; Langmuir won a Nobel Prize in chemistry for his work. In the early 1950s, Derjaguin and Abrikosova conducted the first measurements of surface forces.
There have also been many studies of periodic colloidal structures and the principles of molecular self-assembly, which are reviewed in the literature. Many other discoveries that serve as the scientific basis for modern nanotechnology can be found in Fundamentals of Interface and Colloid Science by H. Lyklema.

Conceptual origins

The topic of nanotechnology was again touched upon by "There's Plenty of Room at the Bottom," a talk given by physicist Richard Feynman at an American Physical Society meeting at Caltech on December 29, 1959. Feynman described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, while surface tension and Van der Waals attraction would become more important. This basic idea appears feasible, and exponential assembly enhances it with parallelism to produce a useful quantity of end products. At the meeting, Feynman announced two challenges, offering a prize of $1,000 for the first individual to solve each one. The first challenge involved the construction of a tiny motor, which, to Feynman's surprise, was achieved by November 1960 by William McLellan. The second challenge involved scaling down letters small enough to fit the entire Encyclopedia Britannica on the head of a pin; this prize was claimed in 1985 by Tom Newman.
In 1965 Gordon Moore observed that silicon transistors were undergoing a continual process of scaling downward, an observation which was later codified as Moore's law. Since his observation, minimum transistor feature sizes have decreased from 10 micrometers to the 45-65 nm range in 2007; at roughly 0.25 nm per silicon atom, one minimum feature is thus roughly 180 silicon atoms long.
The term "nanotechnology" was first defined by Tokyo Science University, Norio Taniguchi in a 1974 paper (N. Taniguchi, "On the Basic Concept of 'Nano-Technology'," Proc. Intl. Conf. Prod. Eng. Tokyo, Part II, Japan Society of Precision Engineering, 1974.) as follows: "'Nano-technology' mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or one molecule." Since that time the definition of nanotechnology has generally been extended upward in size to include features as large as 100 nm. Additionally, the idea that nanotechnology embraces structures exhibiting quantum mechanical aspects, such as quantum dots, has been thrown into the definition.
Also in 1974 the process of atomic layer deposition, for depositing uniform thin films one atomic layer at a time, was developed and patented by Dr. Tuomo Suntola and co-workers in Finland.
In the 1980s the idea of nanotechnology as deterministic, rather than stochastic, handling of individual atoms and molecules was conceptually explored in depth by Dr. K. Eric Drexler, who promoted the technological significance of nano-scale phenomena and devices through speeches and the books Engines of Creation: The Coming Era of Nanotechnology and Nanosystems: Molecular Machinery, Manufacturing, and Computation, (ISBN 0-471-57518-6). Drexler's vision of nanotechnology is often called "Molecular Nanotechnology" (MNT) or "molecular manufacturing," and Drexler at one point proposed the term "zettatech" which never became popular.

Experimental advances

Nanotechnology and nanoscience got a boost in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1985 and the structural assignment of carbon nanotubes a few years later. In another development, the synthesis and properties of semiconductor nanocrystals were studied, leading to a fast-increasing number of studies of semiconductor nanoparticles, or quantum dots.
In the early 1990s Huffman and Krätschmer, of the University of Arizona, discovered how to synthesize and purify large quantities of fullerenes. This opened the door to their characterization and functionalization by hundreds of investigators in government and industrial laboratories. Shortly after, rubidium-doped C60 was found to be a mid-temperature (Tc = 32 K) superconductor. At a Materials Research Society meeting in 1992, Dr. T. Ebbesen (NEC) described to a spellbound audience his discovery and characterization of carbon nanotubes. This event sent those in attendance, and others downwind of his presentation, into their laboratories to reproduce and push those discoveries forward. Using the same or similar tools as those used by Huffman and Krätschmer, hundreds of researchers further developed the field of nanotube-based nanotechnology.
At present, in 2007, the practice of nanotechnology embraces both stochastic approaches (in which, for example, supramolecular chemistry creates waterproof pants) and deterministic approaches, wherein single molecules (created by stochastic chemistry) are manipulated on substrate surfaces (created by stochastic deposition methods) by deterministic methods such as nudging them with STM or AFM probes and causing simple binding or cleavage reactions to occur. The dream of a complex, deterministic molecular nanotechnology remains elusive. Since the mid-1990s, thousands of surface scientists and thin-film technocrats have latched on to the nanotechnology bandwagon and redefined their disciplines as nanotechnology. This has caused much confusion in the field and has spawned thousands of "nano" papers in the peer-reviewed literature. Most of these reports are extensions of the more ordinary research done in the parent fields.
For the future, some means has to be found for MNT design evolution at the nanoscale which mimics the process of biological evolution at the molecular scale. Biological evolution proceeds by random variation in ensemble averages of organisms, combined with the culling of less-successful variants and the reproduction of more-successful variants; macroscale engineering design likewise proceeds by a process of design evolution from simplicity to complexity, as set forth somewhat satirically by John Gall: "A complex system that works is invariably found to have evolved from a simple system that worked. . . . A complex system designed from scratch never works and can not be patched up to make it work. You have to start over, beginning with a system that works." A breakthrough in MNT is needed which proceeds from the simple atomic ensembles that can be built with, e.g., an STM to complex MNT systems via a process of design evolution. A handicap in this process is the difficulty of seeing and manipulating at the nanoscale compared to the macroscale, which makes deterministic selection of successful trials difficult; in contrast, biological evolution proceeds via the action of what Richard Dawkins has called the "blind watchmaker": random molecular variation and deterministic reproduction/extinction.



Thanks to wikipedia

Sunday, April 6, 2008

Microprocessor


The history of the micro from the vacuum tube to today's dual-core multithreaded madness
Level: Introductory
W. Warner, Author, Freelance
22 Dec 2004
The evolution of the modern microprocessor is one of many surprising twists and turns. Who invented the first micro? Who had the first 32-bit single-chip design? You might be surprised at the answers. This article shows the defining decisions that brought the contemporary microprocessor to its present-day configuration.
At the dawn of the 19th century, Benjamin Franklin's discoveries of the principles of electricity were still fairly new, and practical applications of them were few -- the most notable exception being the lightning rod, which was invented independently by two different people in two different places. Independent contemporaneous (and not so contemporaneous) discovery would remain a recurring theme in electronics.
So it was with the invention of the vacuum tube -- invented by Fleming, who was investigating the Effect named for and discovered by Edison; it was refined four years later by de Forest (but is now rumored to have been invented 20 years prior by Tesla). So it was with the transistor: Shockley, Brattain and Bardeen were awarded the Nobel Prize for turning de Forest's triode into a solid state device -- but they were not awarded a patent, because of 20-year-prior art by Lilienfeld. So it was with the integrated circuit (or IC) for which Jack Kilby was awarded a Nobel Prize, but which was contemporaneously developed by Robert Noyce of Fairchild Semiconductor (who got the patent). And so it was, indeed, with the microprocessor.
Before the flood: The 1960s
Just a scant few years after the first laboratory integrated circuits, Fairchild Semiconductor introduced the first commercially available integrated circuit (although at almost the same time as one from Texas Instruments).
Already at the start of the decade, processes that would last until the present day were available: commercial ICs made in the planar process were available from both Fairchild Semiconductor and Texas Instruments by 1961, and TTL (transistor-transistor logic) circuits appeared commercially in 1962. By 1968, CMOS (complementary metal oxide semiconductor) had hit the market. There is no doubt that technology, design, and process were rapidly evolving.
Observing this trend, Fairchild Semiconductor's director of Research & Development, Gordon Moore, noted in 1965 that the density of elements in ICs was doubling annually, and predicted that the trend would continue for the next ten years. With certain amendments, this came to be known as Moore's Law.
The first ICs contained just a few transistors per wafer; by the dawn of the 1970s, production techniques allowed for thousands of transistors per wafer. It was only a matter of time before someone would use this capacity to put an entire computer on a chip, and several someones, indeed, did just that.
Development explosion: The 1970s
The idea of a computer on a single chip had been described in the literature as far back as 1952, and more articles like this began to appear as the 1970s dawned. Finally, process had caught up to thinking, and the computer on a chip was made possible. The air was electric with the possibility.
Once the feat had been established, the rest of the decade saw a proliferation of companies old and new getting into the semiconductor business, as well as the first personal computers, the first arcade games, and even the first home video game systems -- thus spreading consumer contact with electronics, and paving the way for continued rapid growth in the 1980s.
At the beginning of the 1970s, microprocessors had not yet been introduced. By the end of the decade, a saturated market led to price wars, and many processors were already 16-bit.
The first three
At the time of this writing, three groups lay claim to having been the first to put a computer on a chip: the Central Air Data Computer (CADC), the Intel® 4004, and the Texas Instruments TMS 1000.
The CADC system was completed for the Navy's "Tomcat" fighter jets in 1970. It is often discounted because it was a chip set and not a CPU. The TI TMS 1000 was first to market in calculator form, but not in stand-alone form -- that distinction goes to the Intel 4004, which is just one of the reasons it is often cited as the first (incidentally, it too was just one chip in a set of four).
In truth, it does not matter who was first. As with the lightning rod, the light bulb, radio -- and so many other innovations before and after -- it suffices to say it was in the aether, it was inevitable, its time had come.
Where are they now?
CADC spent 20 years in top-secret, cold-war-era mothballs until finally being declassified in 1998. Thus, even if it was the first, it has remained under most people's radar even today, and did not have a chance to influence other early microprocessor design.
The Intel 4004 had a short and mostly uneventful history, to be superseded by the 8008 and other early Intel chips (see below).
In 1973, Texas Instruments' Gary Boone was awarded U.S. Patent No. 3,757,306 for the single-chip microprocessor architecture. The chip was finally marketed in stand-alone form in 1974, for the low, low (bulk) price of US$2 apiece. In 1978, a special version of the TI TMS 1000 was the brains of the educational "Speak and Spell" toy that E.T. jerry-rigged to phone home.
Early Intel: 4004, 8008, and 8080
Intel released its single-chip 4-bit all-purpose processor, the Intel 4004, in November 1971. It had a clock speed of 108 kHz and 2,300 transistors, with ports for ROM, RAM, and I/O. The chip was originally designed for use in a calculator, and Intel had to renegotiate its contract to be able to market it as a stand-alone processor. Its ISA had been inspired by the DEC PDP-8.
The Intel 8008 was introduced in April 1972, and didn't make much of a splash, being more or less an 8-bit 4004. Its primary claim to fame is that its ISA -- provided by Computer Terminal Corporation (CTC), who had commissioned the chip -- was to form the basis for the 8080, as well as for the later 8086 (and hence the x86) architecture. Lesser-known Intels from this time include the nearly forgotten 4040, which added logical and compare instructions to the 4004, and the ill-fated 32-bit Intel 432.
Intel put itself back on the map with the 8080, which used the same instruction set as the earlier 8008 and is generally considered to be the first truly usable microprocessor. The 8080 had a 16-bit address bus and an 8-bit data bus, a 16-bit stack pointer to memory which replaced the 8-level internal stack of the 8008, and a 16-bit program counter. It also contained 256 I/O ports, so I/O devices could be connected without taking away or interfering with the addressing space. It also possessed a signal pin that allowed the stack to occupy a separate bank of memory. These features are what made this a truly modern microprocessor. It was used in the Altair 8800, one of the first renowned personal computers (other claimants to that title include the 1963 MIT Lincoln Labs' 12-bit LINC/Laboratory Instruments Computer built with DEC components and DEC's own 1965 PDP-8).
Although the 4004 had been the company's first, it was really the 8080 that clinched its future -- this was immediately apparent, and in fact in 1974 the company changed its phone number so that the last four digits would be 8080.
Where is Intel now?
Last time we checked, Intel was still around.



RCA 1802
In 1974, RCA released the 1802, an 8-bit processor with an architecture quite different from other 8-bit processors. It had a register file of 16 registers of 16 bits each. Using the SEP instruction, you could select any of the registers to be the program counter; using the SEX instruction, you could select any of the registers to be the index register. It did not have the standard subroutine CALL immediate and RET instructions, though they could be emulated.
A few commonly used subroutines could be called quickly by keeping their addresses in some of the 16 registers. Before a subroutine returned, it jumped to the location immediately preceding its entry point, so that after the SEP "return" transferred control back to the caller, the register would be pointing to the right value for next time. An interesting variation was to have two or more subroutines in a ring so that they were called in round-robin order.
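To make this calling convention concrete, here is a toy Python simulation of the idea (a sketch only: the opcode names, addresses, and register assignments below are invented for illustration and are not real 1802 machine code):

    # Toy model of the 1802's SEP-register trick. MEM maps addresses to
    # (opcode, operand) pairs; REG holds the registers; whichever register
    # was last selected by SEP serves as the program counter.
    MEM = {
        0: ("SEP", 3),        # main: call the subroutine via register 3
        1: ("SEP", 3),        # main: call it again -- no CALL/RET needed
        2: ("HALT", None),
        9: ("SEP", 0),        # entry-1: hand control back to the caller
        10: ("PRINT", "hi"),  # subroutine entry point
        11: ("JMP", 9),       # jump to entry-1 so reg 3 ends up at entry
    }
    REG = {0: 0, 3: 10}       # reg 0 = main's PC, reg 3 = subroutine entry
    pc = 0                    # which register is currently the PC
    while True:
        op, arg = MEM[REG[pc]]
        REG[pc] += 1          # the active PC advances past the instruction
        if op == "SEP":       # switch the PC to another register
            pc = arg
        elif op == "JMP":     # absolute jump within the current routine
            REG[pc] = arg
        elif op == "PRINT":
            print(arg)
        elif op == "HALT":
            break

Because the SEP at address 9 executes last, register 3 has already advanced back to the entry point (address 10) by the time control returns to main, so the next SEP 3 calls the subroutine again; the program prints "hi" twice and halts.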
The RCA 1802 is considered one of the first RISC chips, although earlier designs, such as Seymour Cray's CDC 6600, also embodied RISC principles (see the sidebar, The evolution of RISC).
Where is it now?
Sadly, the RCA chip was a spectacular market failure due to its slow clock speed. But it could be fabricated to be radiation resistant, so it was used on the Voyager 1, Viking, and Galileo space probes (where rapidly executed commands aren't a necessity).



IBM 801
In 1975, IBM® made some of the earliest efforts to build a microprocessor based on RISC design principles (although it wasn't called RISC yet -- see the sidebar, The evolution of RISC). Initially a research effort led by John Cocke (the father of RISC), the IBM 801 is said by many to have been named after the address of the building where the chip was designed -- but we suspect that the IBM systems already numbered 601 and 701 had at least something to do with it also.
Where is the 801 now?
The 801 chip family never saw mainstream use, and was primarily used in other IBM hardware. Even though the 801 never went far, it did inspire further work which would converge, fifteen years later, to produce the Power Architecture™ family.



The evolution of RISC
RISC stands for "Reduced Instruction Set Computing" or, in a more humorous vein, for "Relegate the Important Stuff to the Compiler," and is also known as load-store architectures.
In the 1970s, research at IBM produced the surprising result that some complex operations were actually slower than a sequence of smaller operations doing the same thing. A famous example of this was the VAX's INDEX instruction, which ran slower than a loop implementing the same function.
RISC started being adopted in a big way during the 1980s, but many projects embodied its design ethic even before that. One notable example is Seymour Cray's 1964 CDC 6600 supercomputer, which sported a design that included a load-store architecture with two addressing modes and plenty of pipelines for arithmetic and logic tasks (more pipelines are necessary when you're shuttling instructions in and out of the CPU in parallel rather than serially). Most RISC machines possess only about five simple addressing modes -- the fewer the addressing modes, the more reduced the instruction set (the IBM System 360 had only three modes). Pipelined CPUs are also easier to design if simpler addressing modes are used.
Moto 6800
In 1975, Motorola introduced the 6800, a chip with 78 instructions and probably the first microprocessor with an index register.
Of particular significance here is the use of the index register, a processor register (a small amount of fast computer memory used to speed the execution of programs by providing quick access to commonly used values). The index register can modify operand addresses during the run of a program, typically while doing vector or array operations. Before the invention of index registers, and without indirect addressing, array operations had to be performed either by linearly repeating program code for each array element or by using self-modifying code techniques. Both of these methods harbor significant disadvantages for program flexibility and maintenance; more importantly, they are wasteful of scarce computer memory.
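As a small illustration of the difference (plain Python standing in for assembly here; the memory layout and addresses are made up for the example):

    # A toy "memory" holding a four-element array at base address 100.
    mem = {100: 3, 101: 5, 102: 7, 103: 9}
    BASE = 100

    # Without an index register: one hard-coded access per element, so
    # the code grows with the array and cannot handle a length chosen
    # at run time.
    total = mem[100] + mem[101] + mem[102] + mem[103]

    # With an index register: a single loop body in which the register
    # (here i) modifies the operand address on each pass.
    total = 0
    for i in range(4):
        total += mem[BASE + i]
    print(total)  # 24 either way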
Where is the 6800 now?
Many Motorola stand-alone processors and microcontrollers trace their lineage to the 6800, including the popular and powerful 6809 of 1979.
MOS 6502
Soon after Motorola released the 6800, the company's design team quit en masse and formed their own company, MOS Technology. They quickly developed the MOS 6501, a completely new design that was nevertheless pin-compatible with the 6800. Motorola sued, and MOS agreed to halt production. The company then released the MOS 6502, which differed from the 6501 only in the pin-out arrangement.
The MOS 6502 was released in September 1975, and it sold for US$25 per unit. At the time, the Intel 8080 and the Motorola 6800 were selling for US$179. Many people thought this must be some sort of scam. Eventually, Intel and Motorola dropped their prices to US$79. This had the effect of legitimizing the MOS 6502, and they began selling by the hundreds. The 6502 was a staple in the Apple® II and various Commodore and Atari computers.
Where is the MOS 6502 now?


Many of the original MOS 6502s still have loving homes today, in the hands of collectors (or even original owners) of machines like the Atari 2600 video game console, the Apple II family of computers, the first Nintendo Entertainment System, and the Commodore 64 -- all of which used the 6502 or its variants. MOS 6502 processors are still being manufactured today for use in embedded systems.



AMD clones the 8080
Advanced Micro Devices (AMD) was founded in 1969 by Jerry Sanders. Like so many of the people who were influential in the early days of the microprocessor (including the founders of Intel), Sanders came from Fairchild Semiconductor. AMD's business was not the creation of new products; it concentrated on making higher quality versions of existing products under license. For example, all of its products met MILSPEC requirements no matter what the end market was. In 1975, it began selling reverse-engineered clones of the Intel 8080 processor.
Where is AMD now?
In the 1980s, first licensing agreements -- and then legal disputes -- with Intel eventually led to court validation of clean-room reverse engineering and opened the 1990s floodgates to many clone corps.



Fairchild F8
The 8-bit Fairchild F8 (also known as the 3850) microcontroller was Fairchild's first processor. It had no stack pointer, no program counter, no address bus. It did have 64 registers (the first 8 of which could be accessed directly) and 64 bytes of "scratchpad" RAM. The first F8s were multichip designs (usually 2-chip, with the second being ROM). The F8 was released in a single-chip implementation (the Mostek 3870) in 1977.
Where is it now?


The F8 was used in the company's Fairchild Channel F Video Entertainment System in 1976. By the end of the decade, Fairchild played mostly in niche markets, including the "hardened" IC market for military and space applications, and in Cray supercomputers. Fairchild was acquired by National Semiconductor in the 1980s, and spun off again as an independent company in 1997.
16 bits, two contenders
The first multi-chip 16-bit microprocessor was introduced by either Digital Equipment Corporation in its LSI-11 OEM board set and its packaged PDP 11/03 minicomputer, or by Fairchild Semiconductor with its MicroFlame 9440, both released in 1975. The first single-chip 16-bit microprocessor was the 1976 TI TMS 9900, which was also compatible with the TI 990 line of minicomputers and was used in the TM 990 line of OEM microcomputer boards.
Where are they now?


The DEC chipset later gave way to the 32-bit DEC VAX product line, which was replaced by the Alpha family, which was discontinued in 2004.
The aptly named Fairchild MicroFlame ran hot and was never chosen by a major computer manufacturer, so it faded out of existence.
The TI TMS 9900 had a strong beginning, but was packaged in a large (for the time) ceramic 64-pin package which pushed the cost out of range compared with the much cheaper 8-bit Intel 8080 and 8085. In March 1982, TI decided to start ramping down TMS 9900 production, and go into the DSP business instead. TI is still in the chip business today, and in 2004 it came out with a nifty TV tuner chip for cell phones.
Zilog Z-80
Probably the most popular microprocessor of all time, the Zilog Z-80 was designed by Federico Faggin after he left Intel, and it was released in July 1976. Faggin had designed or led the design teams for all of Intel's early processors: the 4004, the 8008, and particularly, the revolutionary 8080.
Silicon Valley lineage
It is interesting to note that Federico Faggin defected from Intel to form his own company; meanwhile Intel had been founded by defectors from Fairchild Semiconductor, which had itself been founded by defectors from Shockley Semiconductor Laboratory, which had been founded by William Shockley, who had defected from AT&T Bell Labs -- where he had been one of the co-inventors of the first transistor.
As an aside, Federico Faggin had also been employed at Fairchild Semiconductor before leaving to join Intel.
This 8-bit microprocessor was binary-compatible with the 8080 and, surprisingly, is still in widespread use today in many embedded applications. Faggin intended it to be an improved version of the 8080 and, according to popular opinion, it was. It could execute all of the 8080 op codes as well as 80 more instructions (including 1-, 4-, 8-, and 16-bit operations, block I/O, block move, and so on). Because it contained two sets of switchable data registers, it supported fast operating-system and interrupt context switches.
The thing that really made it popular, though, was its memory interface. Since the CPU generated its own RAM refresh signals, it offered lower system costs and made it easier to design a system around. When coupled with its 8080 compatibility and its support for CP/M, the first standardized microprocessor operating system, the low cost and enhanced capabilities made this the chip of choice for many designers (including Tandy; it was the brains of the TRS-80 Model 1).
The Z-80 featured many undocumented instructions that were in some cases a by-product of early designs (which did not trap invalid op codes, but tried to interpret them as best they could); in other cases the chip area near the edge was used for added instructions, but fabrication methods of the day made the failure rate high. Instructions that often failed were just not documented, so the chip yield could be increased. Later fabrication made these more reliable.
Where are they now?
In 1979, Zilog announced the 16-bit Z8000. Sporting another great design, with a stack pointer and both a user and a supervisor mode, this chip never really took off. The main reason: Zilog was a small company; it struggled with support and never managed to bank enough to stay around and outlast the competition.
However, Zilog is not only still making microcontrollers, it is still making Z-80 microcontrollers. In all, more than one billion Z-80s have been made over the years -- a proud testament to Faggin's superb design.
Faggin is currently Chairman of the Board & Co-Founder of Synaptics, a "user interface solutions" company in Silicon Valley.
Intel 8085 and 8086
In 1976, Intel updated the 8080 design with the 8085 by adding two instructions to enable/disable three added interrupt pins (and the serial I/O pins). They also simplified hardware so that it used only +5V power, and added clock-generator and bus-controller circuits on the chip. It was binary compatible with the 8080, but required less supporting hardware, allowing simpler and less expensive microcomputer systems to be built. These were the first Intel chips to be produced without input from Faggin.
In 1978, Intel introduced the 8086, a 16-bit processor which gave rise to the x86 architecture. It did not contain floating-point instructions. In 1980 the company released the 8087, the first math co-processor they'd developed.
Next came the 8088, the processor for the first IBM PC. Even though IBM's engineers at the time wanted to use the Motorola 68000 in the PC, the company already had the rights to produce the 8086 line (acquired by trading Intel the rights to its bubble memory designs), and it could reuse modified 8085-type support components (68000-style components were much scarcer).
Moto 68000
In 1979, Motorola introduced the 68000. It had 32-bit internal registers and a 32-bit address space, though its external bus was still 16 bits because of hardware prices. Originally designed for embedded applications, the 68000's DEC PDP-11- and VAX-inspired design meant that it eventually found its way into the Apple Macintosh, Amiga, Atari, and even the original Sun Microsystems® and Silicon Graphics computers.
Where is the 68000 now?
As the 68000 was reaching the end of its life, Motorola entered into the Apple-IBM-Motorola "AIM" alliance which would eventually produce the first PowerPC® chips. Motorola ceased production of the 68000 in 2000.
The dawning of the age of RISC: The 1980s
Advances in process ushered in the "more is more" era of VLSI, leading to true 32-bit architectures. At the same time, the "less is more" RISC philosophy allowed for greater performance. When combined, VLSI and RISC produced chips with awesome capabilities, giving rise to the UNIX® workstation market.
The decade opened with intriguing contemporaneous independent projects at Berkeley and Stanford -- RISC and MIPS. Even with the new RISC families, an industry shakeout commonly referred to as "the microprocessor wars" meant that we left the 1980s with fewer major microprocessor manufacturers than we had coming in.
By the end of the decade, prices had dropped substantially, so that record numbers of households and schools had access to more computers than ever before.
RISC and MIPS and POWER
RISC, too, started in many places at once, and was antedated by some of the examples already cited (see the sidebar, The evolution of RISC).
Berkeley RISC
In 1980, the University of California at Berkeley started something it called the RISC Project (in fact, the professors leading the project, David Patterson and Carlo H. Séquin, are credited with coining the term "RISC").
The project emphasized pipelining and the use of register windows: by 1982, they had delivered their first processor, called the RISC-I. With only about 44,000 transistors (compared with about 100,000 in most contemporary processors) and only 32 instructions, it outperformed any other single-chip design in existence.
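A hedged C sketch of the register-window idea follows; the sizes are simplified (not the RISC-I's exact layout), and the spill/fill logic a real chip needs on window overflow is omitted. A call simply slides a window pointer across a large register file, and because adjacent windows overlap, the caller's output registers become the callee's input registers with no memory traffic:

#include <stdint.h>

/* Illustrative register-window model with simplified sizes. */
#define NWINDOWS  8
#define WINDOW_SZ 16      /* registers visible at any moment   */
#define OVERLAP   4       /* shared between caller and callee  */

static uint32_t regfile[NWINDOWS * (WINDOW_SZ - OVERLAP) + OVERLAP];
static int cwp;           /* current window pointer            */

/* Register r of the current window maps onto the big file. */
static uint32_t *reg(int r) {
    return &regfile[cwp * (WINDOW_SZ - OVERLAP) + r];
}

/* A call slides the window instead of saving registers to
 * memory; the last OVERLAP registers of the caller become the
 * first OVERLAP registers of the callee, carrying arguments. */
static void on_call(void) { cwp++; }
static void on_ret(void)  { cwp--; }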
MIPS
Meanwhile, in 1981, just across the San Francisco Bay from Berkeley, John Hennessy and a team at Stanford University started building what would become the first MIPS processor. They wanted to use deep instruction pipelines -- a difficult-to-implement practice -- to increase performance. A major obstacle was that pipelining required hard-to-implement interlocks to ensure that instructions taking multiple clock cycles stopped the pipeline from loading new data until they had completed. The MIPS design settled on a relatively simple demand to eliminate interlocking: all instructions must take only one clock cycle. This was a potentially useful alteration of the RISC philosophy.
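The practical consequence -- fittingly, MIPS stood for Microprocessor without Interlocked Pipeline Stages -- was that hazard handling moved into software. The toy C sketch below (invented names, not a real MIPS assembler) shows the classic load-use case: with no hardware interlock, the instruction right after a load cannot use the loaded value, so the assembler must fill that "load delay slot" with an unrelated instruction, or with a nop:

#include <stdio.h>
#include <string.h>

/* Toy hazard checker with invented names. With no hardware
 * interlock, the instruction right after a load must not use
 * the loaded register, so software reorders code or adds nops. */
typedef struct { const char *op; int dst, src; } insn;

static void check_delay_slots(const insn *prog, int n) {
    for (int i = 0; i + 1 < n; i++)
        if (strcmp(prog[i].op, "lw") == 0 && prog[i + 1].src == prog[i].dst)
            printf("hazard after insn %d: fill delay slot with a nop\n", i);
}

int main(void) {
    insn prog[] = {
        { "lw",  1, 0 },   /* r1 <- mem[r0]                    */
        { "add", 2, 1 },   /* uses r1 immediately: needs a nop */
    };
    check_delay_slots(prog, 2);
    return 0;
}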
POWER
Also contemporaneously and independently, IBM continued to work on RISC as well. 1974's 801 project turned into Project America and Project Cheetah. Project Cheetah would become the first workstation to use a RISC chip, in 1986: the PC/RT, which used the 801-inspired ROMP chip.
Where are they now?


By 1983, the RISC Project at Berkeley had produced the RISC-II, which contained 39 instructions and ran more than three times as fast as the RISC-I. Sun Microsystems' SPARC (Scalable Processor ARChitecture) chip design is heavily influenced by the minimalist RISC Project designs of the RISC-I and -II.
Professors Patterson and Sequin are both still at Berkeley.
MIPS was used in Silicon Graphics workstations for years. Although SGI's newest offerings now use Intel processors, MIPS is very popular in embedded applications.
Professor Hennessy left Stanford in 1984 to form MIPS Computer Systems. The company's commercial 32-bit designs implemented the interlocks in hardware. MIPS was purchased by Silicon Graphics, Inc. in 1992 and was spun off as MIPS Technologies, Inc. in 1998. John Hennessy is currently Stanford University's tenth President.
IBM's Cheetah project, which developed into the PC/RT's ROMP, was a bit of a flop, but Project America was in prototype by 1985 and would, in 1990, become the RISC System/6000. Its processor would be renamed the POWER1.
RISC was quickly adopted in the industry, and today remains the most popular architecture for processors. During the 1980s, several additional RISC families were launched. Aside from those already mentioned above, they included:
CRISP (C Reduced Instruction Set Processor) from AT&T Bell Labs.
The Motorola 88000 family.
Digital Equipment Corporation's Alpha (the world's first single-chip 64-bit microprocessor).
HP Precision Architecture (HP PA-RISC).
32-bitness
The early 1980s also saw the first 32-bit chips arrive in droves.
BELLMAC-32A
AT&T's Computer Systems division opened its doors in 1980, and by 1981 it had introduced the world's first single-chip 32-bit microprocessor, the AT&T Bell Labs BELLMAC-32A (it was renamed the WE 32000 after the break-up in 1984). There were two subsequent generations, the WE 32100 and WE 32200, which were used in:
the 3B5 and 3B15 minicomputers
the 3B2, the world's first desktop supermicrocomputer
the "Companion", the world's first 32-bit laptop computer
"Alexander", the world's first book-sized supermicrocomputer
All ran the original Bell Labs UNIX.
Motorola 68010 (and friends)
Motorola had already introduced the MC 68000, which had a 32-bit architecture internally, but a 16-bit pinout externally. It introduced its pure 32-bit microprocessors, the MC 68010, 68012, and 68020, by 1985 or thereabouts, and began to work on a 32-bit family of RISC processors, named the 88000.
NS 32032
In 1983, National Semiconductor introduced the 16-bit-pinout, 32-bit-internal NS 16032, the full 32-bit NS 32032, and a line of 32-bit industrial OEM microcomputers. Sequent also introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032.
Intel entered the 32-bit world in 1981, the same year as the AT&T BELLMAC chips, with the ill-fated 432. It was a three-chip design rather than a single-chip implementation, and it didn't go anywhere. In 1986, the i386 became Intel's first single-chip 32-bit offering, closely followed by the 486 in 1989.


Where are they now?


AT&T closed its Computer Systems division in December 1995. The company shifted to MIPS and Intel chips.
Sequent's SMP machine faded away, and that company also switched to Intel microprocessors.
The Motorola 88000 design wasn't commercially available until 1990, and was cancelled soon after in favor of Motorola's deal with IBM and Apple to create the first PowerPC.
ARM is born
In 1983, Acorn Computers Ltd. was looking for a processor. Some say that Acorn was refused access to Intel's upcoming 80286 chip; others say that Acorn rejected both the Intel 286 and the Motorola MC 68000 as not powerful enough. In any case, the company decided to develop its own processor, called the Acorn RISC Machine, or ARM. The company had development samples, known as the ARM I, by 1985; production models (ARM II) were ready by the following year. The original ARM chip contained only 30,000 transistors.
Where are they now?
Acorn Computers was taken over by Olivetti in 1985, and after a few more shakeups, was purchased by Broadcom in 2000.
However, the company's ARM architecture today accounts for approximately 75% of all 32-bit embedded processors. The most successful implementation has been the ARM7TDMI with hundreds of millions sold in cellular phones. The Digital/ARM combo StrongARM is the basis for the Intel XScale processor.
A new hope: The 1990s
The 1990s dawned just a few months after most of the Communist governments of Eastern and Central Europe had rolled over and played dead; by 1991, the Cold War was officially at an end. Those high-end UNIX workstation vendors who were left standing after the "microprocessor wars" scrambled to find new, non-military markets for their wares. Luckily, the commercialization and broad adoption of the Internet in the 1990s neatly stepped in to fill the gap. For at the beginning of that decade, you couldn't run an Internet server or even properly connect to the Internet on anything but UNIX. A side effect of this was that a large number of new people were introduced to the open-standards Free Software that ran the Internet.
The popularization of the Internet led to higher desktop sales as well, fueling growth in that sector. Throughout the 1990s, desktop chipmakers participated in a mad speed race to keep up with "Moore's Law" -- often neglecting other areas of their chips' architecture to pursue elusive clock rate milestones.
32-bitness, so coveted in the 1980s, gave way to 64-bitness. The first high-end UNIX processors would blaze the 64-bit trail at the very start of the 1990s, and by the time of this writing, most desktop systems had joined them. The POWER™ and PowerPC family, introduced in 1990, had a 64-bit ISA from the beginning.
Power Architecture
IBM introduced the POWER architecture -- a multichip RISC design -- in early 1990. By the next year, the first single-chip PowerPC derivatives (the product of the Apple-IBM-Motorola AIM alliance) were available as a high-volume alternative to the predominant CISC desktop architecture.
Where is Power Architecture technology now?
Power Architecture technology is popular in all markets, from the high-end UNIX eServer™ to embedded systems. When used on the desktop, it is often known as the Apple G5. The cooperative climate of the original AIM alliance has been expanded into an organization by the name of Power.org.
DEC Alpha
In 1992, DEC introduced the Alpha 21064 at a speed of 200MHz. The superscalar, superpipelined 64-bit processor design was pure RISC, but it outperformed the other chips and was referred to by DEC as the world's fastest processor. (When the Pentium was launched the next spring, it only ran at 66MHz.) The Alpha was intended for UNIX servers and workstations as well as for desktop variants.
The primary contribution of the Alpha design to microprocessor history was not in its architecture -- that was pure RISC. The Alpha's performance was due to excellent implementation. The microchip design process is normally dominated by automated logic-synthesis flows, but to cope with the extremely complex VAX architecture, Digital's designers had learned to apply painstaking, hand-crafted attention to circuit design. When that same care was applied to a simple, clean architecture like the RISC-based Alpha, the combination yielded the highest possible performance.
Where is Alpha now?


Sadly, the very thing that led Alpha down the primrose path -- hand-tuned circuits -- would prove to be its undoing: every new revision demanded enormous manual engineering effort, making the design slow and expensive to evolve. As DEC was going out of business, its chip division, Digital Semiconductor, was sold to Intel as part of a legal settlement. Intel used the StrongARM (a joint project of DEC and ARM) to replace its i860 and i960 lines of RISC processors.
The Clone Wars begin
In March 1991, Advanced Micro Devices (AMD) introduced its clone of Intel's i386DX. It ran at clock speeds of up to 40MHz. This set a precedent for AMD -- its goal was not just cheaper chips that would run code intended for Intel-based systems, but chips that would also outperform the competition. AMD chips are RISC designs internally; they convert the Intel instructions to appropriate internal operations before execution.
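A minimal sketch of that conversion idea follows, with invented names and formats bearing no relation to AMD's actual decoder internals: a single read-modify-write CISC instruction such as "ADD [mem], reg" is cracked into three simple, fixed-format, RISC-style micro-ops before execution:

#include <stdio.h>

/* Hypothetical micro-op cracking sketch; names and formats are
 * invented, not any vendor's actual internals. */
typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;
typedef struct { uop_kind kind; int dst, src; } uop;

enum { TMP = 100 };  /* an internal temporary register */

/* "ADD [mem], reg" -- one read-modify-write CISC instruction --
 * becomes three simple micro-ops. */
static int crack_add_mem_reg(int addr_reg, int src_reg, uop *out) {
    out[0] = (uop){ UOP_LOAD,  TMP,      addr_reg }; /* tmp <- [addr] */
    out[1] = (uop){ UOP_ADD,   TMP,      src_reg  }; /* tmp += reg    */
    out[2] = (uop){ UOP_STORE, addr_reg, TMP      }; /* [addr] <- tmp */
    return 3;                                        /* 3 micro-ops   */
}

Once cracked, the micro-ops can be scheduled and executed like any RISC instruction stream, which is how these chips kept x86 compatibility while borrowing RISC execution techniques.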
Also in 1991, litigation between AMD and Intel was finally settled in favor of AMD, leading to a flood of clonemakers -- among them, Cyrix, NexGen, and others -- few of which would survive into the next decade.
In the desktop space, Moore's Law turned into a Sisyphean treadmill as makers chased elusive clock speed milestones.
Where are they now?


Well, of course, AMD is still standing. In fact, its latest designs are being cloned by Intel!
Cyrix was acquired by National Semiconductor in 1997, and sold to VIA in 1999. The acquisition turned VIA into a processor player, where it had mainly offered core logic chipsets before. The company today specializes in high-performance, low-power chips for the mobile market.
CISC
CISC was a retroactive term. It was coined and applied to processors after the fact, in order to distinguish traditional CPUs from the new RISC designs. Then in 1993, Intel introduced the Pentium, which was a pipelined, in-order superscalar architecture. It was also backwards-compatible with the older x86 architecture and was thus almost a "hybrid" chip -- a blend of RISC and CISC design ideas. Later, the Pentium Pro included out-of-order code execution and branch-prediction logic, more typically RISC concepts.
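For a flavor of what branch-prediction logic does, here is a minimal C sketch of the textbook two-bit saturating-counter predictor -- one common scheme, offered here as an illustrative assumption rather than Intel's actual Pentium Pro design:

#include <stdint.h>

/* Textbook two-bit saturating-counter predictor. Each counter
 * holds 0..3; values of 2 or above predict "taken". */
#define TABLE_BITS 10
static uint8_t counters[1 << TABLE_BITS];

static int predict_taken(uint32_t pc) {
    return counters[pc & ((1u << TABLE_BITS) - 1)] >= 2;
}

static void train(uint32_t pc, int taken) {
    uint8_t *c = &counters[pc & ((1u << TABLE_BITS) - 1)];
    if (taken  && *c < 3) (*c)++;   /* saturate at strongly taken     */
    if (!taken && *c > 0) (*c)--;   /* saturate at strongly not-taken */
}

The two bits give the predictor hysteresis: a loop branch taken hundreds of times in a row is not mispredicted twice in a row just because of its single exit.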

Where are we now?


The 2000s
The 2000s have come along and it's too early yet to say what will have happened by decade's end. As Federico Faggin said, the exponential progression of Moore's Law cannot continue forever. As the day nears when process will be measured in angstroms instead of nanometers, researchers are furiously experimenting with layout, materials, concepts, and process. After all, today's microprocessors are based on the same architecture and processes that were first invented 30 years ago -- something has definitely got to give.
We are not at the end of the decade yet, but from where we sit at its mid-way point, the major players are few, and can easily be arranged on a pretty small scorecard:
In high-end UNIX, DEC has phased out Alpha, SGI uses Intel, and Sun is planning to outsource production of SPARC to Fujitsu (IBM continues to make its own chips). RISC is still king, but its MIPS and ARM variants are found mostly in embedded systems.
In 64-bit desktop computing, the DEC Alpha is being phased out, and HP just ended its Itanium alliance with Intel. The AMD 64 (and its clones) and the IBM PowerPC are the major players, while in the desktop arena as a whole, Intel, AMD, and VIA make x86-compatible processors along RISC lines.
As for 2005 and beyond, the second half of the decade is sure to bring as many surprises as the first. Maybe you have ideas as to what they will be.
The history of microprocessors is a robust topic -- this article hasn't covered everything, and we apologize for any omissions.


About the author
Wade Warner built his first computer in 1993. He began working as a systems administrator in 1995.

Hydrogen-powered planes on the way

Boeing tests first hydrogen-powered plane and Hydrogen powered plane takes off are a few of the many headlines written in the last 24 hours about Boeing's first successful flight of a human-piloted, hydrogen-powered aircraft. I was one of about 30 journalists at the press conference held at the Ocaña airfield near Madrid in Spain, where the announcement was made.
Boeing's spin machine was definitely laying on the eco-credentials thick and fast, wheeling out John Tracy, Boeing's Chief Technology Officer, to hammer the message home. He told the BBC it was "a historical technological success… full of promises for a greener future".
Well, yes, and no. And from where I was sitting, the future didn't seem quite so mint-tinted. For a start, the wind was blowing in the wrong direction, a factor the experimental aircraft could not handle, meaning the demo flight had to be cancelled.

Health with Coffee


The latest research has not only confirmed that moderate coffee consumption doesn't cause harm, it's also uncovered possible benefits. Studies show that the risk for type 2 diabetes is lower among regular coffee drinkers than among those who don't drink it. Also, coffee may reduce the risk of developing gallstones, discourage the development of colon cancer, improve cognitive function, reduce the risk of liver damage in people at high risk for liver disease, and reduce the risk of Parkinson's disease. Coffee has also been shown to improve endurance performance in long-duration physical activities.