From Green Text to Ray Tracing: Where Did Graphics Cards Come From?

Wondering how we went from chunky pixels to photorealistic gaming? Let’s trace the surprising history of graphics card evolution – the technical milestones, the market shakeups, and the real reasons your GPU looks the way it does today.

If you are a gamer, then I am sure you know how important a graphics card is for your gaming rig. The graphics processing unit (GPU) is one of the most vital components of computer architecture, especially for gaming, video rendering, and general computing. But where did graphics cards come from? 

From basic 2D graphics to today’s stunning, lifelike 3D environments, the evolution of graphics cards has been truly phenomenal. So let us take a look at the historical development of graphics cards and find out what year graphics cards actually came out. 

Where Did Graphics Cards Come From? 

At some point, computer engineers decided that computing needed to be less boring than watching paint dry, and that displays deserved more than green text on black screens. In 1981, IBM released the first-ever video cards alongside the original IBM PC:

  • MDA (Monochrome Display Adapter)
  • CGA (Color Graphics Adapter)

While the MDA featured 4 KB of video memory, the CGA was equipped with 16 KB. The MDA offered a single monochrome text mode that displayed crisp text and symbols in an 80×25 character grid, making it ideal for text-heavy business work, but it had no graphics support at all.

The CGA brought color to the PC: its 320×200 graphics mode showed 4 simultaneous colors drawn from a 16-color palette (with a 2-color 640×200 mode and 16-color text alongside), and it opened the door to more visually appealing applications and games. 
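
Those memory figures are easy to sanity-check. As a rough illustration (assuming the standard 2 bytes per text cell for MDA and 2 bits per pixel for CGA’s 4-color mode), a quick calculation shows why 4 KB and 16 KB were enough:

```python
# Quick sanity check on those video memory figures.
# MDA stored text as character cells (1 byte for the character, 1 for attributes);
# CGA's 320x200 four-color mode stored 2 bits per pixel.

mda_text_buffer = 80 * 25 * 2          # 4,000 bytes -> fits in MDA's 4 KB
cga_320x200 = 320 * 200 * 2 // 8       # 16,000 bytes -> fits in CGA's 16 KB

print(f"MDA 80x25 text buffer: {mda_text_buffer} bytes (~{mda_text_buffer / 1024:.1f} KiB)")
print(f"CGA 320x200 @ 2 bpp:   {cga_320x200} bytes (~{cga_320x200 / 1024:.1f} KiB)")
```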

Number Nine Visual Technology Corporation, founded in 1982, became one of the earliest makers of add-in graphics boards for the PC; its later #9GXE line was designed to support high resolutions and color depths in Windows environments. While a far cry from today’s advanced GPUs, Number Nine’s boards further laid the foundation for the commercial graphics card market. 

Then, in 1984, the graphics card revolution took another step forward with the Enhanced Graphics Adapter (EGA). It offered a resolution of 640×350 pixels and supported 16 colors selected from a palette of 64. Although these cards had a limited color palette and supported low resolutions, they began the GPU revolution and helped launch the graphics card industry.

So if anyone ever asks you, “what year did graphics cards come out?”, tell them the year was 1981. BUT, this is where the story gets messy. So let’s rewind a bit and start from the top.

Evolution of Graphics Cards

Modern GPUs have become the cornerstone of high-quality visuals and rendering techniques. Graphics cards have transformed the dull computer screen into a vibrant visual spectacle. But how did it all begin? Where did graphics cards come from? And what year did graphics cards really come out? Let’s take a detailed look at how graphics cards evolved.

Pre-1980: The Stone Age 

As we have already established, stone age computers had no graphics at all – just black screens with green text. Now we’re talking about the 1970s here. The PDP-11, a minicomputer that displayed text on a terminal, was gaining popularity during this time. 

The Xerox Alto, developed at Xerox PARC (Palo Alto Research Center) and released in 1973, was a pioneering PC that featured a bitmap display capable of showing real graphics. This machine introduced features that feel familiar to us today, such as the graphical user interface (GUI), the mouse, and the desktop metaphor. Although it was never released commercially, it inspired the development of Microsoft Windows and the Apple Macintosh. 

Even though vector graphics systems did exist at the time, they were mostly confined to research labs because they were enormously expensive. Evans & Sutherland (E&S), an American computer graphics firm, built graphics workstations for clients such as NASA and Boeing. While NASA worked with the E&S PS300 family of graphics workstations (1980s), Boeing used the DEC VAX-11/780 computer (1977) paired with an E&S Multi-Picture System. These systems could handle raster and wire-frame imagery, 3D geometry modeling, and real-time animation, but they required dedicated cooling systems, which was a serious challenge at the time. 

Arcade games were exploring new areas of graphics technology too, as titles like Pong (1972) and Asteroids (1979) made clear. These machines were not traditional computers, but they pushed the graphics revolution forward all the same. 

Early 1980s: The Prehistoric Era

In this era, the game starts with the launch of IBM’s Monochrome Display Adapter (MDA). It could display 80 columns of green text on a black screen, but offered no graphics as we understand them now. Alongside it came the Color Graphics Adapter (CGA), whose most common graphics mode showed four simultaneous colors, typically black, white, magenta, and cyan. Strictly speaking, these were not graphics cards but simple display controllers that took character data from memory and put it on screen. But it was still progress.

In 1982, Hercules Computer Technology released the Hercules Graphics Card (HGC), which remained compatible with IBM’s MDA text standard while adding a bitmapped graphics mode. The HGC offered 720×348 monochrome graphics, a higher resolution than anything IBM sold at the time, and Hercules cards quickly became popular with software developers. 

In 1983, Intel introduced the iSBX 275 Video Graphics Controller Multimodule Board, which could display up to eight unique colors at 256×256 resolution. One year later, IBM launched the PGC (Professional Graphics Controller) and the EGA (Enhanced Graphics Adapter). EGA cards were priced around $1,000, which was a LOT of money in 1984. But for the first time, PCs could actually run graphics software that looked professional. It made desktop publishing possible, and games finally started looking better. 

Meanwhile in Japan, NEC Corporation, a Japanese information technology and electronics company, offered the PC-9800 series (launched in 1982), whose enhanced graphics capabilities, 640×400 resolution with up to 16 colors on later models, allowed Japanese game developers to create visually stunning games.

Then, in 1987, IBM once again pushed the computer graphics landscape forward with the Video Graphics Array (VGA), offering 640×480 resolution, 256 colors from a palette of 262,144, and up to 256 KB of video memory. VGA quickly became the standard for PC graphics and stayed there for years. In 1988, ATI Technologies, a Canadian technology corporation, launched the ATI VGA Wonder, a 2D, 16-bit card that even had a mouse port built directly onto it.

All these companies did the groundwork during this era, which led to the development of many more innovative video and graphics cards in the coming years. 

So while we now have some idea about “where did graphics cards come from?”, the answer to “what year did graphics cards come out?” depends largely on how you define a graphics card, because both the technology and the market have evolved over decades. And we are not even half done yet. 

Late 1980s to Early 1990s: The SVGA Wars

SVGA, or Super Video Graphics Array, pushed display standards beyond VGA (Video Graphics Array). SVGA started at 800×600 pixels and could support up to 16 million colors, depending on how much video memory a card carried, which meant sharper, more detailed images and better visual quality, particularly in gaming. But because SVGA was never a single rigid standard, every manufacturer defined its own high-resolution modes, including 800×600, 1024×768, and 1280×1024, and compatibility turned chaotic until VESA stepped in to standardize things. 
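
The “depending on the video memory” part is just arithmetic. As a rough sketch (assuming one uncompressed frame buffer and the usual 8/16/24 bits per pixel), here is how quickly memory requirements climbed with resolution and color depth:

```python
# Rough VRAM needed for one uncompressed frame at common SVGA resolutions.
# "Up to 16 million colors" (24 bits per pixel) only worked if the card
# carried enough memory to hold the frame in the first place.

def frame_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

for width, height in [(800, 600), (1024, 768), (1280, 1024)]:
    for bpp, colors in [(8, "256 colors"), (16, "65K colors"), (24, "16.7M colors")]:
        mib = frame_bytes(width, height, bpp) / (1024 * 1024)
        print(f"{width}x{height} @ {bpp:>2} bpp ({colors:>12}): {mib:.2f} MiB")
```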

If you wanted to buy a graphics card, then you needed to research compatibility with certain games and applications. Some cards even came with expandable memory slots, allowing you to upgrade by adding more chips.

1992-1997: The 3D Graphics Revolution

The 1990s witnessed a revolution in graphics cards. However, early graphics cards simply served as frame buffers, compelling the CPU to do all the heavy lifting while graphics cards just displayed the results. As software became more complex and demanding, this created serious bottlenecks.

The Vision864 chipset, developed by the American graphics chip maker S3, changed the game. It added hardware acceleration for line drawing, polygon fills, and text rendering, delivering up to 5x performance improvements over software-only solutions. It was followed by Cirrus Logic’s GD5426 and GD5428 chipsets, which brought Windows GUI acceleration and made desktop operations a lot smoother.
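
To get a feel for what “hardware acceleration for line drawing” replaced, here is a minimal software line rasterizer (Bresenham’s algorithm, sketched in Python for illustration). Chips in this class ran loops like this, plus fills and blits, in dedicated silicon instead of burning CPU cycles:

```python
def draw_line(x0, y0, x1, y1):
    """Bresenham's line algorithm: the per-pixel loop that 2D accelerators
    took over from the CPU. Returns the list of pixels on the line."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

print(draw_line(0, 0, 6, 3))   # pixels stepping from (0, 0) to (6, 3)
```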

This is when ATI introduced their Mach32 chipset, targeting CAD professionals with 2D acceleration and support for 1280×1024 resolutions. ATI aimed to position its brand as the performance leader for CAD and professional applications. And just when things seemed to settle down, Windows 95 arrived and completely changed the landscape. Now, every user suddenly wanted to grab a graphics card for smoother GUI operations. As driver quality became crucial, graphics card manufacturers struggled to optimize their products for Windows. 

This was the era when graphics card reviews started gaining popularity among enthusiasts, as publications benchmarked cards in applications like Photoshop, CorelDRAW, and 3D modeling software. The graphics card market matured rapidly during this time.

But as 1996 rolled in, American video game developer id Software launched Quake, while the American hardware company 3dfx Interactive released the Voodoo Graphics card. Gaming was never the same again. Quake’s software renderer looked decent, but GLQuake running on a Voodoo card was pure magic: bilinear filtering, perspective-correct texture mapping, and smoother frame rates made games look dramatically more solid and lifelike.

As the original Voodoo card only handled 3D graphics, you still needed a separate 2D card for Windows. But the leap in visual quality justified the cost, especially for gamers. 

1997-1999: The Golden Age of 3dfx

3dfx Interactive dominated the graphics card market in the late 1990s with a string of innovative products. In 1998, the company introduced the Voodoo2, again focused purely on 3D acceleration. You could even link two Voodoo2 cards with Scan-Line Interleave (SLI) technology, with each card rendering alternate lines of the screen for a further performance boost.
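
A toy model makes the idea concrete. The sketch below (an illustration of the concept only, not 3dfx’s actual hardware scheme) assigns even scanlines to one card and odd scanlines to the other, so each card rasterizes half the frame:

```python
# Toy model of Scan-Line Interleave: card 0 draws the even scanlines,
# card 1 the odd ones, and the final frame interleaves the results.
# (A sketch of the concept only, not 3dfx's actual hardware scheme.)

WIDTH, HEIGHT = 16, 8

def render_scanline(card_id, y):
    # Stand-in for real rasterization: tag each pixel with the card that drew it.
    return [card_id] * WIDTH

def render_frame_sli(num_cards=2):
    return [render_scanline(y % num_cards, y) for y in range(HEIGHT)]

for row in render_frame_sli():
    print("".join(str(pixel) for pixel in row))
```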

While using two Voodoo2 cards meant shelling out a lot of cash, somewhere around $600 total, it could easily run games like Unreal and Half-Life at high resolutions and smooth frame rates. 3dfx cards became synonymous with PC gaming due to their unmatched performance. 

But just when the company seemed unstoppable, ATI (founded in 1985) and NVIDIA (founded in 1993) geared up to give the brand some stiff competition. 

1999-2000: The Era of NVIDIA 

The early offerings from NVIDIA, such as the NV1 and RIVA 128, could not compete with 3dfx’s gaming-focused products. However, the tables turned in NVIDIA’s favor in 1998 when the RIVA TNT was released. It combined 2D and 3D capabilities on a single chip and delivered performance that could go toe to toe with the Voodoo2. As users grew tired of dual-card setups, NVIDIA’s single-card solution instantly became more appealing. 

NVIDIA’s TNT2 performed even better, adding features like 32-bit color depth and higher-resolution support, and NVIDIA quickly became a brand synonymous with superior image quality. During this period, ATI focused on 2D graphics and professional applications to stay relevant. ATI’s Rage series offered basic 3D capabilities but could not keep up with dedicated 3D accelerators.

As NVIDIA’s single-card 2D/3D designs took off, 3dfx Interactive kept falling behind. The Voodoo3 series launched in 1999 with solid technical performance but no significant advancements, while NVIDIA’s GeForce 256 shook up the GPU market with features such as hardware transform and lighting. 3dfx’s Voodoo4 and Voodoo5 series failed to leave a mark and became the company’s final products.

During this period, 3dfx made a number of strategic mistakes that wrecked its finances. With limited funds, 3dfx lacked the resources to invest in research and development, which ultimately killed the company. NVIDIA acquired 3dfx Interactive’s assets in 2000, gaining its intellectual property, patents, trademarks, SLI technology, and talented engineers.

1999-2001: The GeForce Revolution

So, what year did graphics cards come out? This is where things finally start getting interesting, and we start getting real answers. In 1999, NVIDIA launched the GeForce 256 with 32 or 64 MB of memory and a 128-bit memory bus. It was marketed as “the world’s first ‘GPU’, or Graphics Processing Unit”. It was also the first consumer graphics card with dedicated hardware for transform and lighting (T&L) calculations, work that had previously fallen to the CPU.
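
What did “transform and lighting” actually mean? Per vertex: multiply its position by a combined model-view-projection matrix, then compute how brightly a light hits it. The sketch below illustrates that math in NumPy (with my own toy matrix and values, not NVIDIA’s pipeline); this is the per-vertex work the GeForce 256 pulled off the CPU:

```python
import numpy as np

# What "transform and lighting" means per vertex: multiply the position by a
# combined model-view-projection matrix, then compute a simple diffuse
# (Lambert) term against a light direction. Before the GeForce 256, the CPU
# did this for every vertex of every frame. (Toy matrix and values below.)

def transform_and_light(vertices, normals, mvp, light_dir):
    light_dir = light_dir / np.linalg.norm(light_dir)
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    clip = homogeneous @ mvp.T                          # transform into clip space
    ndc = clip[:, :3] / clip[:, 3:4]                    # perspective divide
    diffuse = np.clip(normals @ light_dir, 0.0, 1.0)    # per-vertex lighting
    return ndc, diffuse

vertices = np.array([[0.0, 0.0, -2.0], [1.0, 0.0, -2.0], [0.0, 1.0, -2.0]])
normals  = np.array([[0.0, 0.0, 1.0]] * 3)
mvp = np.eye(4)
mvp[3] = [0.0, 0.0, -1.0, 0.0]                          # crude perspective: w = -z
positions, brightness = transform_and_light(vertices, normals, mvp, np.array([0.3, 0.5, 1.0]))
print(positions)
print(brightness)
```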

ATI answered in 2000 with the original Radeon, built around the Radeon DDR graphics processor with 32 or 64 MB of memory and a 128-bit memory bus, but NVIDIA’s GeForce line generally kept the edge in gaming performance. NVIDIA’s first GPU could drive games with complex geometry and lighting effects while keeping frame rates smooth, and the GeForce2 series, launched in 2000, improved performance further while maintaining compatibility with existing software. 

In fact, NVIDIA introduced different variants for different market segments: 

  • GeForce2 MX for budget systems
  • GeForce2 GTS for mainstream gaming
  • GeForce2 Ultra for enthusiasts

Although ATI had largely stayed out of 3D gaming until then, the Radeon 7200 gave NVIDIA’s GeForce2 strong competition and offered better video processing to boot. The Radeon 8500 series then challenged the GeForce3 directly and, with advanced anti-aliasing and anisotropic filtering, often delivered better performance per dollar.

2001-2010: The Shader Wars

The rivalry between NVIDIA and ATI reached new heights and drove remarkable innovation cycles: the two companies launched new graphics cards roughly every six months, each generation better than the last. 

DirectX 8 introduced programmable shaders, small programs that run on the graphics hardware itself to create custom visual effects. In 2001, NVIDIA’s GeForce3 became the first card to support vertex and pixel shaders, and it completely transformed game graphics: developers could finally build complex lighting models, convincing water, and surfaces that actually looked like wood, metal, or fabric. ATI released the Radeon 9700 Pro in 2002 with far stronger shader performance, making it the most popular choice among enthusiasts and establishing ATI as a serious competitor to NVIDIA.
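
To make “small programs running on graphics hardware” concrete: a pixel shader is just a function the GPU evaluates once for every pixel on screen. The toy below emulates the idea on the CPU in Python (a made-up procedural ripple effect, not real DirectX 8 shader code), the sort of custom look fixed-function hardware simply could not express:

```python
import math

# A pixel shader is a small function the GPU runs once per pixel. This is a
# CPU-side toy of the idea: a made-up procedural "ripple" pattern, the kind
# of custom per-pixel effect fixed-function hardware could not express.

def ripple_shader(x, y, time, width, height):
    u = (x / width) * 2.0 - 1.0            # normalize to [-1, 1]
    v = (y / height) * 2.0 - 1.0
    dist = math.sqrt(u * u + v * v)        # distance from screen center
    return 0.5 + 0.5 * math.sin(12.0 * dist - 4.0 * time)   # animated ring pattern

WIDTH, HEIGHT = 48, 20
for y in range(HEIGHT):
    print("".join("#" if ripple_shader(x, y, 0.0, WIDTH, HEIGHT) > 0.5 else "."
                  for x in range(WIDTH)))
```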

It was during this era that active cooling became a requirement for graphics cards. Earlier models got by with passive heatsinks, but newer GPUs drew more power and demanded fans, and high-end cards started shipping with dual-fan coolers that sounded like miniature jet engines. DirectX 9 then expanded shader capabilities dramatically: GPUs could run longer, more complex shader programs, and games like Far Cry and Half-Life 2 delivered visual fidelity never seen before. 

As the rivalry intensified, both brands shipped faster models almost every year, and power consumption, heat output, and prices climbed right along with performance. In response, manufacturers began targeting specific market segments at different price points. 

While budget cards offered basic DirectX 9 compatibility, mainstream cards offered excellent performance at popular resolutions. Enthusiast cards, on the other hand, pushed boundaries regardless of power consumption, heat generation, and cost.

Launched in 2006, NVIDIA’s GeForce 8800 series introduced the unified shader architecture, a design still in use today. Instead of separate vertex and pixel processors, unified shaders could handle any type of graphics calculation, which meant processing power could be allocated dynamically to whatever a game needed most at any moment.
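
A back-of-the-envelope model shows why that matters. In the sketch below (invented workload numbers, purely for illustration), a fixed split of vertex and pixel units stalls on a pixel-heavy frame, while a unified pool of the same size just works through the combined queue:

```python
import math

# Why unified shaders help, in one toy comparison. A frame has some vertex
# work and some pixel work; a fixed split of units wastes capacity when the
# mix is lopsided, while a unified pool drains whatever is queued.
# (Invented workload numbers, not a real GPU scheduler.)

def cycles_fixed_split(vertex_jobs, pixel_jobs, vertex_units, pixel_units):
    # Each unit retires one job per cycle; vertex units cannot touch pixel work.
    return max(math.ceil(vertex_jobs / vertex_units),
               math.ceil(pixel_jobs / pixel_units))

def cycles_unified(vertex_jobs, pixel_jobs, units):
    # Any unit can take any job, so the pool just works through the total.
    return math.ceil((vertex_jobs + pixel_jobs) / units)

vertex_jobs, pixel_jobs = 16, 112          # a pixel-heavy frame
print("fixed 4 vertex + 4 pixel units:", cycles_fixed_split(vertex_jobs, pixel_jobs, 4, 4), "cycles")
print("unified pool of 8 units:       ", cycles_unified(vertex_jobs, pixel_jobs, 8), "cycles")
```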

As the GeForce 8800 GTX proved to be a performance powerhouse, ATI responded with its Radeon HD series, which also used a unified shader architecture. The upside of this cut-throat competition was rapid performance improvement, and it pushed graphics capabilities beyond gaming and into general computing. (ATI Technologies itself was acquired by AMD in 2006.) Now that we are past asking “what year did graphics cards come out?”, let’s turn to the introduction of CUDA.

2007-2009: The Rise of CUDA

In 2007, NVIDIA introduced CUDA (Compute Unified Device Architecture), a parallel computing platform and application programming interface (API). It let software use the GPU’s massive parallelism to speed up complex tasks well beyond graphics: scientific simulations, password cracking, cryptocurrency mining, and video encoding all became practical on consumer graphics cards. AMD (formerly ATI) followed with similar technology in its Stream platform and, later, OpenCL support. 
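
The core idea of CUDA is launching thousands of lightweight GPU threads, each handling one element of a problem. CUDA kernels are normally written in C/C++, but the same model can be sketched from Python via the Numba bindings, as below (a minimal illustration; it assumes an NVIDIA GPU with a working CUDA setup and the numba package installed):

```python
import numpy as np
from numba import cuda

# The CUDA model in miniature: launch thousands of lightweight threads, each
# responsible for one array element. (Sketched via Numba's Python bindings;
# requires an NVIDIA GPU, a working CUDA setup, and the numba package.)

@cuda.jit
def saxpy(out, x, y, a):
    i = cuda.grid(1)                     # this thread's global index
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](out, x, y, np.float32(2.0))

print(out[:4])                           # should match 2.0 * x[:4] + y[:4]
```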

From that point on, the high-end GPU market attracted users far beyond gamers: cryptocurrency miners, content creators, scientific researchers, and anyone who needed massive parallel processing power.

NVIDIA also shipped its G92-based cards in late 2007, followed by the GeForce 9 series in 2008, while AMD responded with the Radeon HD 3000 series. NVIDIA’s GTX 200 series then claimed the fastest-card crown, countered by AMD’s Radeon HD 4000 series. The NVIDIA vs AMD (formerly ATI) dance intensified around 2009 with dual-GPU monsters on both sides: NVIDIA’s GTX 295 against AMD’s Radeon HD 4870 X2 and Radeon HD 5970. Does that answer “where did graphics cards come from?” Ah, but wait, there’s more.

2010-2017: The Modern Era

This was truly the era of high-performance graphics cards. GPUs gained more memory, got steadily faster, and became more efficient, and high-end cards made 4K gaming a reality. Multiple-monitor setups became popular, and the arrival of VR headsets demanded even more from graphics hardware. Power consumption, however, proved a serious challenge: high-end cards came to require 300+ watts and sophisticated cooling, and some enthusiast-grade models even shipped with liquid cooling.

In 2011, NVIDIA still dominated the GPU market with the GeForce 500 series, followed in 2012 by the faster GeForce 600 series. AMD countered with the Radeon HD 6990 and the Radeon HD 7000 series, both powerhouses for high-resolution gaming. In 2012, NVIDIA’s dual-GPU GTX 690 helped the brand hold its lead, and AMD responded with the R9 200 series in 2013. In 2014, both brands introduced new flagship models: NVIDIA’s GTX Titan Z, with 5,760 CUDA cores and 12 GB of memory, and AMD’s R9 295X2, with 5,632 stream processors and 8 GB of memory. Both were dual-GPU cards built for demanding workloads and high-resolution gaming.

2018-Present: Ray Tracing Arrives

NVIDIA’s GeForce RTX 20 series brought real-time ray tracing to mainstream graphics cards in 2018, led by the RTX 2080 Ti with 11 GB of GDDR6, 4,352 CUDA cores, and a 352-bit memory bus. AMD had no ray tracing support at the time. Ray tracing let games simulate lighting techniques previously reserved for film rendering, and titles like Metro Exodus and Control looked stunning with it switched on. But early implementations carried a heavy performance cost, and frame-rate drops were a real concern even on high-end hardware.
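
At its core, ray tracing asks one question over and over: does this ray hit that surface, and where? Below is the math for the simplest case, a ray against a sphere, sketched in Python for illustration; RTX-class hardware performs enormous numbers of similar intersection tests (against triangles and bounding boxes) every frame:

```python
import math

# The kernel of ray tracing: does this ray hit that sphere, and if so, where?
# Real GPUs test rays against triangles and bounding boxes in dedicated
# hardware; this is just the underlying math, sketched for illustration.

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4.0 * a * c
    if discriminant < 0:
        return None                                  # the ray misses the sphere
    t = (-b - math.sqrt(discriminant)) / (2.0 * a)   # nearest intersection distance
    return t if t > 0 else None

# A ray from the camera straight down -z toward a sphere two units away:
print(ray_sphere_hit(origin=(0, 0, 0), direction=(0, 0, -1), center=(0, 0, -2), radius=0.5))
# -> 1.5 (the ray hits the near surface of the sphere 1.5 units from the camera)
```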

The RTX 3090 followed in 2020, with 24 GB of GDDR6X, 10,496 CUDA cores, and a 384-bit memory bus. In response, AMD introduced the RX 6000 series, topping out at 80 compute units and 16 GB of GDDR6. The brand war continued in 2021 as NVIDIA expanded the RTX 30 series and AMD fired back with the RX 6700 XT and 6600 XT, followed in 2022 by the refreshed 6650 XT, 6750 XT, and 6950 XT.

NVIDIA introduced the RTX 4080 and 4090 in late 2022, with the RTX 4070 Ti arriving shortly after, while AMD launched the RDNA 3-based RX 7900 XT and XTX. This was also the year Intel finally returned to discrete graphics with the Arc A750 and A770, featuring up to 4,096 FP32 units, 32 ray tracing units, and GDDR6 on a 256-bit bus. By early 2025, NVIDIA had rolled out the RTX 5080 and 5090, AMD had countered with its RX 9000 series (later adding the RX 9060 XT), and Intel had launched the Arc B580.

NVIDIA’s RTX 30 and 40 series dramatically improved ray tracing performance. On top of that, DLSS uses AI to upscale frames rendered at lower resolutions, which lets ray-traced games run at much higher frame rates. Today, stiff competition between the leading brands continues to drive innovation.

Takeaway

There’s no doubt that graphics cards have come a long way since their humble beginnings; compared with the earliest models, modern GPUs are supercomputers. Within a few decades, we have gone from green text on a black screen to rendering photorealistic graphics in real time. From the first graphics adapters of 1981 to NVIDIA’s GeForce RTX 50 series and AMD’s Radeon RX 9000 series, the GPU’s journey has been nothing short of remarkable. Today’s biggest games can look like interactive movies, and the immersive experiences created by VR were considered science fiction just decades ago. 

Today, graphics cards handle everything from gaming and video rendering to AI training and cryptocurrency mining. So instead of asking “Where did graphics cards come from?”, perhaps we should be asking where they’re going next. With AI integration, the evolution continues at breakneck speed, and judging by the current pace of development, we’re probably still in the early chapters of this story.
