After Effects & Performance. Part 10: The birth of the GPU


Introducing the GPU.

Over the past ten years the GPU, or Graphics Processing Unit, has probably made greater advances than any other system component.  The GPU is no longer just a “graphics card”, but a powerful co-processing device that can dramatically improve the performance of an increasingly wide range of software.

In Part 5 we introduced the CPU, and in this article we’re introducing the GPU.  Like Part 5, there are few direct references to After Effects itself, and once again a lot of this is historical trivia.  In fact this article finishes at the point where the term “GPU” was coined, so it’s not so much an introduction to the GPU as a foreword – an introduction up to the point the first GPU was launched.  You’ll have to wait for the next part before we focus on exactly how the GPU has affected professional graphics software, but it’s interesting to understand where it all started. Besides, everyone loves an origin story.

After Effects is a complex piece of graphics software and so understanding the relationship between After Effects and the GPU is critical to understanding performance.  The GPU has had an enormous impact on 3D workflows and so it’s only natural to look at After Effects and wonder what’s going on.

Beyond the simple question “what graphics card should I buy”, the GPU has been the driving force behind an emerging and separate industry – real-time 3D graphics.  While non-real time renderers such as Redshift and Octane are taking existing workflows and improving performance with GPUs, real-time 3D engines have the potential to completely revolutionise the entire production process.

Recently, the behind-the-scenes video for “The Mandalorian” was widely shared, with well over 700,000 views to date.  The video demonstrates a new production technology called “StageCraft”, where LED screens are combined with the Unreal 3D engine to create virtual backdrops, eliminating the traditional role of chromakey.  This is a new way of making television shows, made possible by the advances in GPU and real-time technology.  It’s a high-profile example of how GPUs have had a direct impact; not just on what software is used in post-production, but how film and TV shows are actually made.

As someone who recently posted a five-part series on chromakey, I have to face the possibility that in another ten or twenty years’ time, chromakey itself may be obsolete as a production technique – thanks to the GPU.

During the 1980s, as the first graphics cards were released, they were often defined simply by how many colours they could display.  In comparison, a modern GPU is almost unrecognizable in complexity and power.  The story of how GPUs developed is interesting and surprising – and consistently linked to computer games. The modern GPU is the product of two separate worlds. On one side there’s an elite, exclusive, high-end supercomputing pedigree and on the other, computer games played by everyday people on cheap hardware.

In the beginning there was darkness

The popular arcade game “Asteroids” is a great example of an analogue, vector-based display.

Very early computers didn’t have GPUs because they didn’t have graphics, or screens to display them on.  Many early examples of computer displays were based on analogue vector graphics, with the game “Asteroids” being a very famous example. These early vector displays had a unique look, and were fine for monochrome text and lines, but they weren’t suitable for the types of general-purpose computer graphics we know today.

Although many researchers had worked on the concept of digital graphics, the cost of producing anything was prohibitive until the early 1970s.

The technically “normal” way to design a video chip for a computer is to use a frame buffer.  The idea is that the image on the screen is stored in an area of dedicated memory, with one or more bits of memory used to represent each pixel of the screen.  This way, the image on the screen is a direct representation of the bits stored in memory. If this sounds familiar then it’s because it’s basically the same as how any other image is stored.  In parts 2 and 3 of this series we looked at exactly what After Effects does (combines bitmap images), and how those images are stored in memory.  A frame buffer is just a special area of memory that gets converted into a video signal.  The video chip scans the memory many times a second and converts the bits to an output signal that can be displayed on a monitor.
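The frame buffer concept is simple enough to sketch in code. Here’s a toy Python version – purely illustrative, since real hardware scans this memory out as a video signal rather than running software – where every pixel on the screen is a single bit in one block of memory:

```python
# A toy 1-bit-per-pixel frame buffer: each pixel on screen is one bit
# in a block of dedicated memory. (Illustrative only – real video
# hardware scans this memory many times a second to build the signal.)

WIDTH, HEIGHT = 320, 200
framebuffer = bytearray(WIDTH * HEIGHT // 8)  # 8000 bytes at 1 bit/pixel

def set_pixel(x, y, on=True):
    """Set or clear the bit that represents pixel (x, y)."""
    index = y * WIDTH + x            # linear bit position in the buffer
    byte, bit = divmod(index, 8)
    if on:
        framebuffer[byte] |= (1 << bit)
    else:
        framebuffer[byte] &= ~(1 << bit)

def get_pixel(x, y):
    index = y * WIDTH + x
    byte, bit = divmod(index, 8)
    return bool(framebuffer[byte] & (1 << bit))

set_pixel(10, 5)
print(len(framebuffer))   # 8000 bytes for the whole screen
print(get_pixel(10, 5))   # True
```

The image on screen is nothing more than a direct readout of this memory – which is exactly why the idea was so expensive in the 1970s.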

This was fine in theory, but there was a problem.  As we saw in part 3, images can take up a lot of memory, and in the 1970s memory was phenomenally expensive.  Even as desktop computers emerged during the 1970s, the high cost of memory limited the commercial appeal of computer graphics.  Researchers who wanted a frame buffer generally had to build their own, before the first commercial product was released in 1975. The “Picture System” had a resolution of 512 x 512 greyscale pixels, and it cost $15,000.

These days we’re so used to expensive and powerful GPUs that it’s difficult to imagine a time when the CPU was actually driving the display.  In the 1970s everyday TVs worked via analogue RF signals – that’s what PAL and NTSC actually were: analogue broadcast standards – and a few tech companies had products that converted timing pulses from a CPU into a TV signal.  These weren’t “graphics chips” by any means, just a circuit that took a digital signal and output an analogue one, albeit one that you could see if you plugged in a TV.

Racing the beam

The iconic Atari 2600, which went on to sell over 30 million units.

Around the same time, Atari were struggling to follow on from their success with the arcade game “Pong”. Pong had been a hit, but like all arcade games at that time it ran on unique, custom hardware that was designed and built solely to play Pong. Developing arcade machines that only played one game was expensive and risky, so Atari were looking to create a low-cost games console for the home, which could be used to play many different games.  Atari named the project the “VCS 2600”, but they faced a very real problem in sourcing components that were cheap enough to make it feasible.  An arcade machine that only played a single game would be built around custom display hardware. But if a console was being designed to play many different games on the same display (i.e. a TV set) then a more universal graphics system was needed. But the theoretical solution – a frame buffer – was just too expensive.

Wikipedia sums up the problem succinctly:

“At the time the 2600 was being designed, RAM was extremely expensive, costing tens of thousands of dollars per megabyte. A typical 320 by 200 pixel display with even a single bit per pixel would require 8000 bytes of memory to store the framebuffer.[3] This would not be suitable for a platform that aimed to cost only a few hundred dollars. Even dramatic reductions in the resolution would not reduce the cost of memory to reasonable levels. Instead, the design team decided to remove the memory-based framebuffer entirely.”

Atari’s cost-saving solution was to avoid the need for a full frame buffer by only using enough RAM for a single line of the screen at a time.  The CPU would be used to process the RAM needed for each scan line just in time for it to be drawn on the screen.  There was no frame buffer where the entire image on the screen was stored.  Instead, instructions from the CPU were converted to an analogue video signal in real-time. It’s worth noting that the chip which made this possible was designed by an Atari engineer named Jay Miner, who pops up later on.

“Racing the Beam” gives a great insight into how crazy difficult it was to develop games for the Atari 2600, because it lacked the memory for a conventional frame buffer.

The Atari 2600 had a resolution that was roughly equivalent to 160 x 192 pixels, and so to draw a single frame, the CPU had to stop what it was doing 192 times and process the RAM for the next line of the screen, before starting over again for the next frame.  To some extent, it’s correct to say that the Atari 2600 doesn’t have a graphics chip, at least not in the conventional digital sense. One engineer described it as being more like a video circuit that was constantly tickled by the CPU.
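The structure of that process can be sketched in a few lines. This is purely hypothetical pseudocode-style Python – real 2600 code was cycle-exact 6502 assembly, far stricter than this – but it shows the shape of the problem: one line of RAM, rebuilt over and over, just ahead of the beam.

```python
# A rough sketch of "racing the beam": with no frame buffer, the CPU
# rebuilds a one-line buffer just in time for every scan line, on every
# frame. (Hypothetical structure, not real Atari 2600 code.)

WIDTH, LINES = 160, 192
lines_emitted = []                 # stands in for the analogue TV signal

def compute_pixel(x, y, frame):
    return (x ^ y ^ frame) & 1     # placeholder pattern, not a real game

def emit_to_tv(line_buffer):
    lines_emitted.append(list(line_buffer))

def draw_frame(frame):
    line_buffer = [0] * WIDTH      # RAM for ONE line, not the whole screen
    for line in range(LINES):
        # The CPU fills the buffer just before the beam reaches this line...
        for x in range(WIDTH):
            line_buffer[x] = compute_pixel(x, line, frame)
        # ...then the video circuit converts it to output while the beam
        # sweeps the screen, and the same buffer is reused for the next line.
        emit_to_tv(line_buffer)

for frame in range(2):             # the real console did this ~60x a second
    draw_frame(frame)
    # whatever time remains before the next frame goes to the game logic

print(len(lines_emitted))          # 384 lines emitted for 2 frames
```

Notice that almost all of the work in the loop is display housekeeping, not gameplay – which is exactly why so little CPU time was left for the games themselves.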

Although this saved on the cost of memory, it made programming the 2600 extremely difficult. As the CPU spent most of its time drawing the screen, there wasn’t a lot of processing power left over to actually run the games.  The full details can be found in the excellent book “Racing the Beam”, which gives a great history of the Atari 2600 and the technical challenges it posed.

MOS, Al, Bob and VIC

The Apple II, launched a few months earlier than the Atari 2600, was the first personal computer to feature colour graphics.  Like the Atari 2600, the CPU was used to drive the display, and the output was an RF television signal.

The Atari 2600 and the Apple II both used CPUs made by MOS Technology, a microprocessor design and manufacturing company which in turn was owned by Commodore.  The head of the chip department, Al Charpentier, was happy with the success their CPUs had found with Atari and the Apple II, but was disappointed at the way the graphics had been implemented. He decided that MOS would design a dedicated graphics chip, as a more technically advanced alternative to the simple digital-analogue circuits being used by Atari and Apple.  They named the project VIC, short for Video Interface Chip.

After a few failed prototypes, MOS Technology produced their VIC chip – all they had to do was go out and find people to buy it.  Unfortunately, no-one seemed interested.  Although they had originally hoped that Atari would use it in a follow-up to the 2600, Atari had their own engineer – the very same Jay Miner – working on their own custom video chips. These were a huge leap forward from the primitive circuits used in the Atari 2600, and would be included in Atari’s newer games consoles as well as a line of desktop computers.

By the end of the 1970s personal computers were only just beginning to take off, and they were aimed at the small business market. By the time you purchased a computer, monitor, disk drives and a printer you could be looking at several thousand dollars. The main priority was to run business software such as VisiCalc.  To these buyers, colour graphics weren’t a priority; it was more important that the machine could display at least 40 columns of text – if not 80 – even if it was monochrome.  The main selling point of the VIC chip was that it could produce composite video in colour, with the trade-off being that it only displayed 22 columns of very chunky text. Commodore were doing well with their “PET” line of business computers, but the VIC chip’s chunky text made it unsuitable for any business-orientated machine.

Colour AND movement

The only person who seemed genuinely excited by the VIC chip was a young engineering student called Bob Yannes, who’d used the VIC chip as the basis for his senior project. He graduated and was employed by Commodore, and one of his jobs was to try and sell the VIC chip to potential customers. Yannes assembled a few basic components and spare parts together as a technical demo, literally in his own bedroom, which demonstrated the VIC chip’s graphics capabilities. It wasn’t a complete computer – it had no operating system, it couldn’t load programs, its entire purpose was to run a one-off technical demo. But it was small, it had colour and a keyboard, and it plugged into a TV.

If a British inventor called Clive Sinclair hadn’t formed his own computer company, that would probably be the end of the story. The VIC chip would have failed to find a customer, and eventually it would have faded from history. But he did, and in 1980 Sinclair released the ZX-80. What separated the ZX-80 from the other recognizable computers of the time – the Commodore PET, the Apple II and the TRS-80 – was the price. While an Apple II with disk drives could cost thousands, the ZX-80 was only $200.  The ZX-80 was small, had no sound, could only display monochrome text, and the screen flashed black every time you pressed a key.  The keyboard was awful and overall the ZX-80 was a bit crap.  But still, at that price point Sinclair had opened up a new market with no competition.

Commodore didn’t know there was a market for cheap, slightly crap computers, and so they were suddenly looking for a product to compete with the ZX-80, and Bob Yannes’ technical demo – built from spare parts in his bedroom – fit the bill. In only a few months it was dramatically transformed into a production model – the Commodore Vic 20.  Thanks to the VIC chip, Commodore had a home computer that could compete with Sinclair on price, but with full colour graphics and sound.  It was a hit.

No, the image isn’t stretched. The VIC chip could display colour graphics at the cost of having very chunky text. But thanks to the ZX-80 (right) it found a home in the Commodore Vic 20 (middle).

The Vic 20 was the first personal computer to sell 1 million units.  Commodore sold more in 6 months than the total number of Apple IIs that had been sold to date.  It seemed that the general public liked cheap, colourful graphics. The success of the Vic 20 led directly to the Commodore 64, with improved graphics and sound, and the C64 remains the single best-selling computer model of all time.

While there was a very significant market for more expensive business computers, many of which only displayed monochrome text, what drove Commodore’s success was cheap, colourful graphics on home computers that could play games.

Despite the financial success of both Commodore and Atari, there were all sorts of management problems and internal political struggles. Just as Commodore were the most successful computer company on the planet, most of their development team left to work for Atari, and many of Atari’s team moved to Commodore.

Jay Miner, the engineering wizard behind Atari’s earlier graphics chips, had left Atari several years before all the political drama.  After a break he joined a startup to develop a new games platform and found himself the lead designer. Commodore, now without a development team and with no new products on the horizon, bought the company and turned what could have been a games console into their next line of computers: The Amiga.

Blits and Blobs

The Amiga didn’t rely on the CPU to do all of the graphics processing.  Jay Miner had designed a suite of specialized co-processors that were capable of advanced sound and graphics.  The original Amiga prototype had been called “Lorraine”, and the three special co-processors were called Paula, Agnus and Denise.  While Denise was the main video processor, Agnus had a very special trick that played an important role in the Amiga’s graphics performance: the blitter.  “Blitter” was shorthand for “block image transfer”, and in simple terms it meant that Agnus could move large blocks of memory around very quickly, without using the CPU. As we’ve seen previously, large blocks of memory are exactly what bitmap images are, and so the blitter enabled the Amiga to process images and animation much faster than any computer which relied solely on the CPU.

While this was very useful for arcade-style games, it also gave a noticeable performance boost to the user interface. By the time the Amiga was released, Apple had released the first Macintosh and Microsoft were working on the first version of Windows. Keeping track of all the windows on the user’s screen and moving them around with the mouse placed an additional burden on the CPU – unless you were using an Amiga, in which case Agnus did all the hard work instead.
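In code terms, a “blit” is nothing more than copying a rectangular block of pixels from one bitmap to another in a single operation. Here’s a toy Python sketch, with plain list slicing standing in for Agnus’s dedicated silicon:

```python
# A toy "blitter": copy a rectangular block of pixels from one bitmap
# to another in one operation. On the Amiga this was done by the Agnus
# chip in hardware, with no CPU involvement; here Python slicing is a
# stand-in for that dedicated silicon.

def make_bitmap(width, height, fill=0):
    return [[fill] * width for _ in range(height)]

def blit(src, dst, src_x, src_y, dst_x, dst_y, w, h):
    """Block image transfer: move a w x h block from src into dst."""
    for row in range(h):
        dst[dst_y + row][dst_x:dst_x + w] = \
            src[src_y + row][src_x:src_x + w]

sprite = make_bitmap(8, 8, fill=1)       # an 8x8 block of "on" pixels
screen = make_bitmap(32, 16, fill=0)
blit(sprite, screen, 0, 0, 10, 4, 8, 8)  # e.g. drawing a sprite or window
print(screen[4][10], screen[4][9])       # 1 0 – the block landed at (10, 4)
```

Moving windows, sprites and images around are all variations of this one operation – which is why offloading it from the CPU made such a visible difference.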

Even better than the real thing

As an example of how powerful the blitter was, the Amiga could emulate Apple software and run it faster than a “real” Mac with the same CPU.  Even though there’s an overhead in emulating a different computer, the Amiga’s custom chipset freed up the CPU from having to process graphics and sound.  This made up for the additional overhead of emulating another operating system. So if you had an Apple Mac running at 10 MHz and an Amiga emulating an Apple Mac, also with a 10 MHz CPU – the Amiga was actually faster.

Because the custom chipsets in the Amiga were a fundamental part of the design, Agnus, Denise and their successors can’t be considered “graphics cards” in the sense that we know them now.  They did, however, prove the usefulness of having dedicated co-processors to take the load off the main CPU. Again, the Amiga found success with the computer games market – after all, those custom chips had originally been destined for a games console.

Despite the runaway success of the Commodore 64, a seemingly endless stream of poor management decisions left Commodore unable to compete with the Apple Macintosh and IBM PCs. While Amigas were initially renowned for their superior graphics capabilities, Commodore wasn’t able to sustain development of the unique Amiga chipsets, and by the early 1990s third-party “graphics cards” for IBM PCs and the Apple Macintosh could display higher resolutions and more colours, even if they lacked the same animation abilities.

The first plug-in “graphics card” was released by IBM in 1984, and actually consisted of three separate circuit boards that included an additional CPU. However, at over $4,000 it didn’t sell well.

But despite the Amiga’s prominence in the niche field of desktop video, the next notable advance would come in 1991, when S3 released the first PC video card that included 2D acceleration.  The power of the Amiga’s blitter had finally made it to the world of the PC.

While the original “trilogy” of successful home computers were aimed at the small business market, what drove the commercial success of the home computer was cheap, colourful graphics – and games. But while the Vic 20 was the first computer to be sold at K-mart (and in toy shops), the other side of the GPU’s origin story began in a very different world.

Big Boy Toys

The desktop video revolution was a revolution because regular desktop computers were replacing supercomputers that had cost 100x as much.  This revolution is a recurring theme across many of my older articles, so rather than repeat myself I’ll just share some links:

Similar themes were discussed in the popular “Mograph goes through puberty” article, published on the Motionographer website.

But while I’ve written a lot about the desktop side of the desktop video revolution, I haven’t written much about the traditional online suites that were being replaced. The film, video and television industry has changed so much over the past 20 years that it’s easy to forget what it used to be like. After all, when was the last time you recorded video on tape?

Now that we’re firmly in the year 2020, only three companies design the most common CPUs in use: Intel and AMD for desktop computers, and ARM for mobile devices.  Historically, other companies designed competing CPUs with differing design philosophies, aimed at different markets.  During the 1970s, 1980s and 1990s there were several companies manufacturing high-performance CPUs, including Sun, MIPS (owned by Silicon Graphics), DEC, IBM and Motorola.  Each of these companies designed their own CPUs for use in their own computers, and all of these competing platforms were incompatible with each other.

A range of CPUs designed and manufactured by different companies.

During the 1980s and 1990s, the one company that became synonymous with high end video production, graphics and visual effects was Silicon Graphics.  Silicon Graphics, or SGI, successfully created an aura and reverence around their brand that has rarely been equaled since.  Recently, I’ve been working my way through this YouTube channel devoted to old SGI computers, and it’s been fascinating to finally learn actual facts about these mythical machines, which were usually only referred to with hushed whispers and rumors of their million-dollar price tags.  When I posted my original series on the Desktop Video Revolution, one reader sent me a quick email to say that his company had spent $3 million installing and outfitting an Inferno suite. Compare that to the cost of an iMac and a Creative Cloud subscription and you can see why the transition to desktop computers was revolutionary.

(I’ve heard of people watching After Effects tutorials in fast forward, but if you jump to Dodoid’s channel you almost need to slow them down, because that guy can talk!)

The field of 3D computer graphics was undergoing rapid development during the 1980s, although it was mostly being driven by engineering and CAD, not visual FX.  While Silicon Graphics had the highest profile brand, other companies such as Sun, DEC, Hewlett Packard and IBM were also producing high performance “minicomputer” workstations, and developing their own 3D graphics solutions.

The wide variety of different computer hardware and processor types made it difficult for software developers, as they had to write specific versions of their software for each different computer.  This was not only a waste of effort but risked fragmenting the market with multiple competing solutions.  Even today, there are applications that are only available on Windows and not Macs and vice versa, so you can imagine how difficult it was when there were many more platforms than the two we have today.

Opening the doors

It was acknowledged that there needed to be an industry-standard programming language for 3D graphics, to make it easier for software companies to support different types of computers.  It simply wasn’t feasible for every software package to be completely re-written for every different CPU and operating system on the market.  The short version is that Silicon Graphics took their existing proprietary language, called IRIS GL, and used it as the basis for a new, open standard called OpenGL.  OpenGL was initially launched in 1992 and it continues to be supported (but not actively developed) to this day.

OpenGL is, in simple terms, a language and not a piece of hardware. (For the purpose of this article, it’s not important whether OpenGL is an API or a language; I tend to mix the terms up.)

OpenGL provided a common ground for software and hardware developers to work with 3D graphics.  Software developers didn’t need to write different versions of their software for each different computer; they just needed to write one version that used OpenGL.  Each computer manufacturer could then add their own support for OpenGL, ensuring compatibility across all platforms.
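A loose software analogy might help here: think of it as a common interface that applications are written against once, with each platform supplying its own implementation behind it. The names below are invented purely for illustration – this is not the actual OpenGL API:

```python
# An analogy for a common graphics standard: one interface, many
# implementations. (Class and method names are hypothetical – this is
# an illustration of the idea, not the real OpenGL API.)

class Renderer:
    """The 'standard': every platform must provide these calls."""
    def draw_triangle(self, a, b, c):
        raise NotImplementedError

class SoftwareRenderer(Renderer):
    """One platform renders on the CPU ('software rendering')..."""
    def draw_triangle(self, a, b, c):
        return f"CPU rasterised triangle {a}-{b}-{c}"

class VendorChipRenderer(Renderer):
    """...another uses a custom chip, behind the same interface."""
    def draw_triangle(self, a, b, c):
        return f"hardware drew triangle {a}-{b}-{c}"

def application(renderer: Renderer):
    # The application is written once, against the common interface,
    # and runs unchanged on any conforming implementation.
    return renderer.draw_triangle((0, 0), (1, 0), (0, 1))

print(application(SoftwareRenderer()))
print(application(VendorChipRenderer()))
```

The application code never changes; only the implementation behind the interface does – which is exactly what made hardware acceleration possible later on.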

The primary goal of OpenGL was to create a unified tool for developing 3D graphics, and one of its greatest strengths proved to be its documentation.  The target market for OpenGL was software developers, at a time when the primary focus of 3D graphics was CAD, engineering and scientific visualization.  OpenGL was an agreed standard on how to program 3D graphics, but it didn’t include any specification for how those commands should be executed.  It was up to each individual computing platform to include OpenGL support.

Acceleration In Silico

As mentioned in Part 5, one way to build more performance into a microprocessor is to add dedicated instructions for functions that otherwise need a combination of many different instructions to complete.  By building a chip that can do something “in silicon”, massive performance gains can be had.  In Part 5 I used the FPU – a floating-point unit for non-integer maths – as an example.  The “square root” function can be calculated by an FPU much, much faster than by a CPU which has to loop through a complex algorithm.
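To make the contrast concrete, here’s a sketch of the kind of iterative loop a CPU without hardware square-root support has to grind through – Newton’s method, one of the classic software approaches – next to the single “in silicon” operation:

```python
# Software square root vs a dedicated hardware instruction.
# Newton's method: loop repeatedly, refining a guess each pass.
import math

def software_sqrt(n, iterations=20):
    """Approximate sqrt(n) the slow way, by looping through an algorithm."""
    guess = n / 2 if n > 1 else 1.0
    for _ in range(iterations):
        guess = (guess + n / guess) / 2   # refine the estimate each pass
    return guess

print(software_sqrt(2.0))   # ~1.41421356..., after many loop iterations
print(math.sqrt(2.0))       # the "in silicon" answer, a single operation
```

Both produce the same number, but one takes a long sequence of instructions while the other is a single dedicated operation – the same principle that would later apply to rendering graphics in hardware.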

Initially, processing and rendering OpenGL commands was done by the computer’s CPU – what we call “software rendering”.  But once the specifications for OpenGL were finalized and the standard was launched, there was the potential for hardware manufacturers to design and create custom chips to render OpenGL graphics much faster.

Shall we play a game?

During the 80s and early 90s, 3D graphics development was done on expensive “minicomputers” made by the companies already mentioned – SGI, Sun, DEC, IBM and so on.  In 1984, the film “The Last Starfighter” included 27 minutes of 3D computer animation, rendered on a Cray supercomputer.  At the time it was made, the Cray X-MP was the fastest supercomputer in the world, while the best selling desktop computer in the same year was the Commodore 64.  The Cray supercomputer cost about $15 million, while a Commodore 64 cost about $300.

In 1984 the most powerful computer you could buy was the Cray X-MP (left), while the best selling computer was the Commodore 64 (right). The Cray cost $15 million, while the Commodore cost about $300. But the Cray was big enough to sit on, so it doubled as a couch, which was nice.

This might not be correct, but I was once told that when the Last Starfighter was being made, there were only two Cray X-MPs in the entire world.  One was used by NORAD to run the USA’s missile defense system, the other was being used for 3D animation.  Even if it’s not true, it’s a good story and emphasizes just how demanding 3D animation is.

Jump forward to about ten years after “The Last Starfighter” was rendering on a Cray, and there was still a huge gulf in price and performance between low-cost home computers and extremely expensive “minicomputers”. The Silicon Graphics “Onyx”, released in 1993, cost about $250,000 while their “Infinite Reality Engine”, released in 1996, cost over $600,000.

A Class divide

If you needed to ask how much one of these cost then you couldn’t afford it.

By the late 1980s a handful of people at SGI began to consider the potential for 3D hardware designed for home computers, but the issue wasn’t just raw performance.  The real problem was that the home computer market was a completely separate world to the high-end minicomputer market.  In 1993 a home computer might cost a few thousand dollars, while Silicon Graphics’ most powerful machines could cost more than half a million dollars.

SGI as a company wasn’t interested in the home computer market, and simply wasn’t run in a way that could make low-cost products.  Their cheapest machines still cost tens of thousands of dollars.  However they were aware that one way to expand their business would be to break into the consumer electronics market, which would result in a much higher volume of sales.  The goal was not to create a low-cost computer for sale to the public, but to have their own unique CPUs – MIPS chips – used by other companies.

At the start of the 1990s SGI began developing a lower-cost CPU, which they felt would be ideally suited to a home games console.  SGI had no intention of designing and selling a console by themselves; they just wanted to sell CPUs to someone who did. After meeting with various companies that had experience with game consoles, SGI partnered with Nintendo.  Many years later the result would be the Nintendo 64.

While the Nintendo 64 eventually proved to be a success, SGI was a company that simply wasn’t suited to selling low-cost products directly to consumers.  SGI designed the CPU and GPU used in the Nintendo 64, but it was Nintendo that designed the console and brought it to market several years later. In order to write games for the Nintendo 64, developers had to purchase development kits from SGI which cost anything from $100,000 to $250,000.

From SGI’s perspective, the Nintendo 64 was a success on several fronts.  Firstly, they had achieved volume sales of their MIPS chips; secondly, every developer who wanted to write games for it had to buy one of their $100,000 developer kits; and finally, the Nintendo 64 marketing campaign had enhanced the profile and reputation of the Silicon Graphics brand.  It was a win-win-win situation.

While SGI had started work on a low-cost CPU many years before the Nintendo 64 finally shipped as a product, the project reinforced the company’s focus on the high-end market, with no attention given to low-cost home computers.  After all, why try to sell a computer for $1,000 when you had developers queuing up to buy them for hundreds of thousands?

Manufacturing down

There were a number of experienced people working at Silicon Graphics who felt differently. They were passionate about 3D graphics and believed there was a market for cost effective solutions on desktop PCs.

By 1992 Scott Sellers, Ross Q. Smith, and Gary Tarolli had left SGI to found a startup called Pellucid, with the intention of bringing 3D graphics to the PC market.  Pellucid proved to be a valuable learning ground, without being a great success.  Pellucid did introduce the first high-resolution, accelerated frame buffer for the PC. Called the “ProGraphics 1024”, it offered a resolution of 1280 x 1024 with 8-bit colour for about $1,000.  At this price point, the high cost of memory was a significant problem.

More valuable lessons came from their former employers at SGI.  Silicon Graphics had adapted the graphics boards for their Iris range of minicomputers to work in an IBM PC.  This had initially been done as an internal debugging tool, before they saw the potential for it to be a commercial product.  However – as mentioned earlier – Silicon Graphics simply had no experience in the low-end consumer market, and didn’t know what to do with it. Pellucid stepped in and licensed the technology from SGI, selling them as “IrisVision” 3D cards for about $4,000 each. The IrisVision board was possibly the first 3D acceleration card for the PC market – if it wasn’t the first, it was one of the first.

While Pellucid manufactured 2,000 IrisVision boards and sold every one, they learned the importance of having software support for custom hardware. The IrisVision board was only supported by three apps, including a flight simulator.

Pellucid had found limited success with their foray into PC graphics, and this was enough for them to be taken over by the industry giant “Media Vision”. Media Vision were a huge and phenomenally successful company that manufactured “multimedia” kits for PCs – bundling graphics cards, sound cards and CD-Rom drives together.  The guys from Pellucid proposed they develop a 3D chip designed for games on the PC, and initially Media Vision seemed like a great development environment.  There was one small problem though, and that was that Media Vision was run by criminals who were “cooking the books”. Almost overnight, Media Vision crumbled and the three former SGI employees were looking for new jobs.

Gordon Campbell was an engineer who’d worked at Intel for many years, before leaving to start his own microprocessor design company.  At some point in 1994, he found himself interviewing Ross Smith, but describes it as “the worst interview I think I’ve ever had”. Campbell could tell that Smith wasn’t really interested in the job that was going, so he asked him what he really wanted to do. Smith told him that he and two colleagues wanted to start a 3D graphics company. Campbell was on board, and set up a few more meetings.  Together, the four of them – three former SGI employees and one from Intel – started a new company and called it 3dfx. But the next step was to decide what to do.

Hardware is hard

Although the guys had left SGI because they believed there was a market for 3D with low cost desktop PCs, their experiences at Pellucid had made them aware of the technical hurdles they faced.

On the hardware side, the major issue was the variety of different, competing hardware standards amongst desktop PCs.  Unlike today, where all expansion cards are some form of PCI, in the 1980s and early 1990s there were several competing bus standards.  Earlier computers had used an ISA bus to accommodate expansion cards, but it was a fairly primitive and unfriendly solution.  In 1987 IBM launched their own, updated expansion bus called MCA – however, it was a closed, proprietary standard. This prompted other manufacturers to launch their own alternatives to ISA, including EISA and VESA.

This meant that by the early 1990s there were at least four competing hardware buses – a major headache for hardware developers, especially motherboard manufacturers. The fragmentation of the expansion card market increased the risk and development cost for anyone building a new product.

At the same time, memory was still very expensive, and placed a limit on how inexpensive a video card could be. This was a significant problem when trying to manufacture a volume, low-cost product for a mass market.

Finally, there was the software. Or, more correctly, there was no software. In 1994 no-one was writing 3D graphics software for PCs.  Without software, there was no market for a 3D graphics card.

But even developing software in 1994 was problematic.  Microsoft and Intel were aggressively pushing developers towards the upcoming “Windows 95” operating system, while most games were still being written for the older DOS. This was so long ago that younger readers might be surprised to learn there was a time when not all PCs actually ran Windows.  Windows 95 – launched in 1995 – was effectively a brand new operating system, and convincing everyone using Microsoft DOS – most of the business world – to migrate to a completely new product was a truly momentous effort.

DOS worked, DOS was safe.  DOS was what everyone was used to. Sure, Microsoft made it clear they wanted everyone to move to Windows 95, but there was no guarantee that people would.  Developers faced the risk of spending time and money writing games and apps for Windows 95, only to discover that no-one was using it.

1994 was a very risky year to enter the desktop PC market with a new hardware product.


But there was another industry that was doing great.

I spent more time and money than I should have playing Daytona in pubs.

Outside of home PCs, arcade games were still a huge business, generating higher annual revenues than Hollywood movies.  One year earlier, Sega had released “Daytona” and “Virtua Fighter”, both of which had groundbreaking 3D graphics and both were hugely successful.  Other arcade developers were looking for solutions to produce 3D graphics at the same quality as Sega.

The 3dfx team visited an arcade game trade show and immediately saw a potential market. Compared to the work they’d done at SGI, the current state of 3D graphics in arcade games was amateurish.  Many of the arcade games, including all of the games made by Midway and Atari, used MIPS processors – familiar territory for former SGI employees.  They decided they would design an add-on expansion card for the arcade game market, which would render 3D graphics.

Now that they had a plan, they found venture capital funding and set about designing their first chip.  They named it “Voodoo”.  Initially, they found success as arcade companies including Atari, Midway, Capcom, Konami and Taito released games that used Voodoo chips for 3D graphics.

But while 1994 was the year 3dfx was founded, it was also the year that Sony released the Playstation 1.  The first Playstation would go on to sell over 100 million units. This was followed in 1996 by the Nintendo 64, which sold over 30 million units. In only a few years these hugely popular home games consoles decimated the arcade game industry.  Once again, the public had shown an appetite for cheap, colourful graphics – and games.

Ironically, it was the work they’d done on low-cost 3D graphics that had prompted the team to leave Silicon Graphics.  Now the product they’d helped design – the Nintendo 64 – was contributing to the demise of their own customers.  3dfx needed a new market.

Doom but not gloom

Up until 1992, home computer games were largely 2D – including genres such as platform games, horizontal-scrolling shooters and sprite-based fighting games.  But in 1992 the course of computer games was changed forever when a small company called id Software released a shareware game called “Wolfenstein 3D”, and introduced the world to the “first person shooter”.  Technically, Wolfenstein wasn’t “real” 3D – just very clever 2D transformations – but no one playing the game cared about how the graphics were being rendered.  They just wanted to shoot Nazis.  Wolfenstein was the first game in a brand new genre.
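Those “clever 2D transformations” were a technique known as raycasting. As a rough, hypothetical sketch – nothing like id’s actual hand-optimized code – the core idea fits in a few lines of Python:

```python
import math

# A toy illustration of the raycasting idea behind Wolfenstein 3D (a
# hypothetical sketch, not id's implementation).  The world is a 2D grid of
# cells; for each screen column a ray is stepped outwards from the player
# until it hits a wall, and that wall slice is drawn with a height inversely
# proportional to the distance -- producing a convincing 3D view from purely
# 2D data.

WORLD = [
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def cast_ray(px, py, angle, step=0.01, max_dist=10.0):
    """Step along a ray from (px, py) until it enters a wall cell ('#')."""
    dist = 0.0
    while dist < max_dist:
        dist += step
        x = px + math.cos(angle) * dist
        y = py + math.sin(angle) * dist
        if WORLD[int(y)][int(x)] == "#":
            return dist
    return max_dist

def wall_height(px, py, angle, screen_h=200):
    """On-screen height of the wall slice for one screen column."""
    return min(screen_h, int(screen_h / cast_ray(px, py, angle)))

# A nearby wall is drawn taller than a distant one -- that's the whole trick.
near = wall_height(2.0, 2.0, 0.0)            # looking at the close east wall
far = wall_height(1.2, 1.2, math.pi / 4.0)   # looking toward a far corner
```

Doing this once per screen column is far cheaper than true 3D geometry, which is how the game ran on the modest CPUs of the day.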

The global response to Wolfenstein 3D was phenomenal.  Following Wolfenstein’s success, id’s main programming guru – John Carmack – developed a more advanced graphics engine.  It was designed for typical home computers with CPU clock speeds of about 25 MHz and without an FPU.  While the rendering engine was more sophisticated than the one in Wolfenstein, it was still not quite true 3D.  id used the new rendering engine in their next game, “Doom”, released in 1993.  To quote Wikipedia:

It is considered one of the most significant and influential titles in video game history, for having helped to pioneer the now-ubiquitous first-person shooter. 

id had changed the course of computer gaming forever, and placed the immediate focus for game developers on first person shooters with 3D graphics.

A year after “Doom” hit home PCs, Sony released the first “Playstation” console, which included hardware support for rendering 3D polygons.  The Playstation’s hardware was limited, but it demonstrated a first step in supporting 3D graphics with low cost custom processors. A recent Ars Technica video on the development of “Crash Bandicoot” sheds some light on the primitive technology, and the challenges facing game developers working with hardware that sold for $400.

Pivoting to PCs

It had only been a few years since 3dfx turned their attention to arcade games, but with the demise of the industry they were looking for a new product.

The popularity of Wolfenstein and Doom on home PCs reinforced their belief that a market existed for 3D graphics on home computers.  The Playstation had been hugely successful, even though its hardware support for 3D graphics was very crude.  Games like Crash Bandicoot and Tomb Raider had become smash hits, despite their technical limitations.

Luckily, those few years had seen some fundamental changes in home computers.  The successful launch of Windows 95 had prompted millions of users to buy new computers with a Pentium processor.  New Pentium systems had two features that were directly relevant to 3D games – the Pentium was the first processor to have a built-in FPU across all models, and new Pentium systems used a PCI bus as standard.  Hardware developers no longer needed to choose one of several competing buses to support, as PCI became the industry norm – with Apple also changing to PCI buses in 1997.

Finally, in 1996 the cost of RAM dropped dramatically – and with it the final barrier to developing a 3D graphics card for the home PC.  3dfx were able to shift their focus from arcade games to the companies developing games for home PCs.

With the hardware obstacles overcome, they faced their next hurdle – software development.  Having learnt from their experience with Pellucid, they knew that if games developers weren’t interested in supporting their 3D hardware, their product wouldn’t have a market.

When Silicon Graphics released OpenGL in 1992, high-end software and hardware developers had an open standard for 3D graphics, but OpenGL didn’t make an impact on home computers.  It wasn’t expected to.  As mentioned earlier, the primary focus of 3D development during the 1980s was engineering, CAD and scientific visualization.  These industries had strict requirements relating to the accuracy and precision of 3D renders, and meeting these technical demands was a higher priority than speed.  OpenGL had successfully met the goal of creating a unified standard for software developers, but it hadn’t been designed for fast rendering.

Conversely, the Playstation had proved a huge success despite its primitive 3D rendering.  It was clear that people playing games cared more about the responsiveness and rendering speed of the system than about quality issues such as polygon counts, aliasing, lighting and texturing.  After considering what they thought the general public wanted, and weighing that up against the cost of hardware development, 3dfx decided that OpenGL wasn’t the right solution for games on home PCs.  However, there wasn’t any alternative, so the team decided to create their own.

3dfx started with the OpenGL API – which they all knew well from their time with Silicon Graphics.  Then they stripped it down and optimized it to focus on rendering speed.  The goal was to create a new 3D language designed specifically for games.  They called their new language “Glide” – even spelling it “GLide” to emphasise the link to OpenGL – and it had a number of features that made it especially easy for programmers and games developers to use.  The process of developing GLide drew heavily on the team’s earlier experience at SGI, developing the lower-cost chipset that found its way into the Nintendo 64.


3dfx weren’t the only company looking to bring cheap 3D rendering to PCs. Their main competition was a company called Rendition, who’d launched a chip called the V1000 in 1995. The V1000 was used in a number of video cards that launched in 1995 and 1996, introducing the concept of 3D acceleration to the desktop PC market.  In the same way that 3dfx had developed their own proprietary 3D language called GLide, Rendition also had their own language – called RRedline. Like 3dfx, Rendition had been selling the concept of 3D acceleration to the games industry, and a number of games had been released that benefited from a Rendition card.

Quaking like Quaker oats in an earthquake

In only a few years, id had changed the course of computer games forever, and following the phenomenal success of Doom there was a huge amount of anticipation and hype around their next game – Quake.  Unlike Wolfenstein and Doom, the rendering engine John Carmack developed for Quake was based on true 3D geometry – meaning it was a perfect candidate for 3D hardware acceleration.

3dfx were pitching their GLide platform to games companies before they’d manufactured any chips, and were actually running presentations using simulations on a $250,000 SGI workstation.  While designing the chips, they discovered that even with all the optimizations they’d made with their new GLide language, the chip would require so many transistors for processing 3D graphics that there wouldn’t be enough room to support 2D graphics as well.  This meant their first chip for home PCs would have to be purely 3D.  Although this was a radical concept, it did make marketing the product simpler – it had one purpose and one purpose only: accelerating 3D graphics for games.  It also removed the burden of 2D graphics compatibility with Windows 95, which was a welcome relief for the 3dfx team.  This was a very different approach from the competing Rendition graphics cards, which were “normal” 2D graphics cards that also offered 3D acceleration.

John Carmack, arguably the hottest games developer on the planet, initially seemed reluctant to adopt hardware acceleration.  The original Quake engine was a purely software renderer.  But 3dfx had forged strong relationships with the developer community, and one of their engineers was able to demonstrate to Carmack how Quake would perform with hardware acceleration.  Carmack joined a board of advisors for 3dfx, although he was never an employee.

Carmack had also agreed to assist with a version of Quake that was accelerated for Rendition’s “Vérité” 3D graphics card.  Two Rendition developers worked on the port with direct support from id Software.  The result, called “vQuake”, was released in time for Christmas 1996 and provided a welcome seasonal boost in sales. It was the first 3D accelerated version of Quake, however Carmack found Rendition’s proprietary graphics API very frustrating to use.  The bad experience prompted his decision to avoid proprietary APIs in the future, and to only support open standards. This meant no GLide version of Quake, even though Carmack was working closely with 3dfx.

Carmack had famously used a NeXT workstation for development, but had been looking around for newer alternatives.  While testing out an SGI workstation he ported a simple level renderer to OpenGL, followed soon after by the full game. In an extensive blog post from December 1996, Carmack clarified his views on accelerated 3D games, including the following notes:

I have been using OpenGL for about six months now, and I have been very impressed by the design of the API, and especially its ease of use. A month ago, I ported quake to OpenGL. It was an extremely pleasant experience…

…I am hoping that the vendors shipping second generation cards in the coming year can be convinced to support OpenGL. If this doesn’t happen early on and there are capable cards that glquake does not run on, then I apologize, but I am taking a little stand in my little corner of the world with the hope of having some small influence on things that are going to effect us for many years to come.

John Carmack, December 1996

Despite the fact that there wasn’t a single video card available for PCs that supported OpenGL, id Software released GLQuake – the OpenGL version of Quake – in January 1997, two months after vQuake had been released for the Rendition cards.

This wasn’t actually bad news for 3dfx.  Because of the way the Voodoo chip had been designed, and thanks to the team’s OpenGL heritage, 3dfx were able to quickly develop a driver that enabled GLQuake to run on their Voodoo cards, even though GLQuake didn’t use the GLide API. They’d been beaten to the market by Rendition, but once Voodoo cards started shipping there was no doubt about which card offered better performance.

Less than a month after GLQuake launched, the best way to play Quake was with a Voodoo card. And the performance gap between GLQuake on a Voodoo card and vQuake on a Rendition card wasn’t small, either. While established graphics card companies had begun to add 3D features to their 2D video cards, these weren’t as powerful as the dedicated 3D processors on the Voodoo cards.  Anecdotally, gamers seemed to like the idea that a Voodoo card’s sole purpose was to accelerate 3D games, and its lack of 2D support actually enhanced its reputation.

Needless to say, Quake was a worldwide smash hit and Voodoo cards flew off the shelves.  For a few brief years, Voodoo cards dominated the 3D gaming industry, with each new version bringing more powerful hardware and better gaming performance.

Directive 3

Around the same time, Microsoft had launched Windows 95 – a hugely significant product intended to replace the aging DOS.  But Microsoft faced a problem – game developers initially found Windows 95 much more difficult to work with.  After some early and very embarrassing compatibility problems with games made for Windows 95, Microsoft was desperate to forge ties with the gaming industry.  While it might be hard to imagine now, there was no guarantee that Windows 95 would be a success.  Microsoft knew they had to get game developers on board, because no one would upgrade to Windows 95 if all the latest games only ran on DOS.  They needed to make it easier for developers to write games for Windows 95, and they needed to guarantee compatibility with a wide range of PC hardware.

Microsoft’s solution was a set of new Windows APIs – collections of programming functions – specifically tailored to the needs of game developers, all using the prefix “Direct”.  This included Direct3D, the Windows equivalent to OpenGL.  With Microsoft throwing their full support behind Direct3D (and the other “Direct” family of APIs), the entire Windows development community now had easy access to 3D graphics. As Direct3D gained traction in the marketplace, graphics card manufacturers began including hardware to accelerate Direct3D rendering as well as OpenGL.

At the start of the 1990s, everyday graphics cards in desktop computers had taken their lead from the Amiga’s blitter and begun to offer 2D acceleration.  This helped speed up things like dragging windows around the screen.  But now graphics cards had a new purpose – accelerating 3D games.  Iconic game franchises such as Quake and Unreal introduced the world to the concept of a “3D graphics card”, and other manufacturers quickly entered the market.  It was computer games – not CAD, engineering or high-end visual effects – that drove the development of 3D graphics cards on home computers.

3dfx had helped to introduce 3D graphics hardware to home computers, but their GLide language was proprietary, and initially they didn’t license it to other hardware companies.  With no significant competition aside from Rendition, game developers initially added support for GLide and Voodoo cards in a mutually beneficial relationship.  However, with Microsoft aggressively developing Direct3D, and other hardware manufacturers free to implement OpenGL support without licensing it (the “Open” in OpenGL reflecting the fact that it was an open standard), the GLide API quickly dropped in popularity – and with it, sales of Voodoo cards. Almost as quickly as they had arrived on the scene and dominated the 3D gaming industry, 3dfx fell from favour and filed for bankruptcy.

3dfx was purchased by another developer of graphics cards: nVidia.  nVidia had been competing with 3dfx for a few years, and their Riva TNT card had sold well in 1998. In 1999, nVidia released their newest card – the GeForce 256.

The GeForce 256 was a relatively expensive card for the time, and it included a number of new features that nVidia claimed were revolutionary – including hardware-accelerated transform and lighting (T&L).  However, games needed to be specifically written to take advantage of these features, and so older games showed no noticeable improvement with the new card.  In some cases older Voodoo cards could outperform the GeForce 256 even on newer games, so the response from the gaming community was a little muted.

But the GeForce 256 has etched its place in history for two reasons, and it remains a significant milestone in the development of 3D accelerated graphics.

The first reason is the introduction of hardware-accelerated transform and lighting. While this might have been underwhelming in 1999 due to the lack of software support, the continued development of hardware T&L led directly to the revolutionary GeForce 8 series, released in 2006.
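To give a sense of what “transform and lighting” actually involves, here’s a rough Python sketch of the per-vertex work that the GeForce 256 moved from the CPU onto the graphics card. The matrices and function names are illustrative only – real pipelines are far more involved – but every vertex really is multiplied by a transform matrix, perspective-divided, and shaded against a light, every single frame:

```python
# Illustrative sketch of per-vertex "transform and lighting" (T&L) work --
# not nVidia's actual pipeline.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transform(vertex, mvp):
    """Transform a 3D point by a model-view-projection matrix, then apply
    the perspective divide to get normalized device coordinates."""
    x, y, z, w = mat_vec(mvp, [vertex[0], vertex[1], vertex[2], 1.0])
    return (x / w, y / w, z / w)

def lambert(normal, light_dir):
    """Simple diffuse (Lambertian) lighting: clamp(N . L, 0, 1)."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, dot))

# A toy perspective matrix that maps z onto w, so x and y shrink with depth.
PERSP = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],
]

near_pt = transform((1.0, 1.0, 2.0), PERSP)  # (0.5, 0.5, 1.0) -- close, large
far_pt = transform((1.0, 1.0, 4.0), PERSP)   # (0.25, 0.25, 1.0) -- far, small
brightness = lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))  # facing the light
```

Multiply this by tens of thousands of vertices at 30+ frames per second and it’s easy to see why offloading it to dedicated hardware mattered.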

The second reason is simple marketing.  While some tech journalists didn’t see the immediate benefits of hardware T&L, nVidia were aggressively promoting their new technology.  The GeForce 256 wasn’t just a simple graphics card, it was the world’s first “Graphics Processing Unit”.

nVidia had launched the first GPU.


This has been Part 10 in a series on the history of After Effects and performance.  Have you missed the others?  They’re really good.  You can catch up on them here:

Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8 & Part 9.

I’ve been writing After Effects articles for the ProVideo Coalition for over 10 years, and some of them are really long too. If you liked this one, then check out the others.