GeForce 8800 GTS specifications. Video cards. Game tests: Hitman: Blood Money

Update: we decided to supplement the initial review with additional theoretical information and comparative tables, as well as the test results of THG's American laboratory, where the "junior" GeForce 8800 GTS was also tested. In the updated article you will also find image quality tests.

GeForce 8800 GTX is head and shoulders above the competition.

You've probably heard about DirectX 10 and the miracles the new API promises over DX9. Screenshots of games that are still in development can be found online. But until now, there have been no video cards with DX10 support on the market. And nVidia was the first to fix that flaw. Welcome to the release of DirectX 10 graphics cards in the form of nVidia GeForce 8800 GTX and 8800 GTS!

A single unified architecture squeezes more out of the shader units, since they can now be used more efficiently than in a fixed layout. The GeForce 8800 GTX, with 128 unified shader units, and the GeForce 8800 GTS, with 96, open a new era in computer graphics. The days of dedicated pixel pipelines are finally over. But let's take a closer look at the new cards.

The G80 graphics core is shown on the package substrate. The new GPU promises to deliver twice the performance of the GeForce 7900 GTX (G71). Its 681 million transistors add up to a huge die area, but when asked about this, nVidia CEO Jen-Hsun Huang replied: "If my engineers said they could double the performance by doubling the die area, I wouldn't hesitate for a second!"

Experience has shown that doubling the area does not come close to doubling the performance, but nVidia seems to have struck the right balance between technological advances and implementing the die in silicon.

GeForce 8800 GTX and 8800 GTS fully comply with the DX10 and Shader Model 4.0 standards, various storage and data transfer standards, support geometry shaders and stream out. How did nVidia implement all of this?

To begin with, nVidia has moved away from the fixed design that the industry has been using for the past 20 years in favor of a unified shader core.


Earlier we showed similar slides illustrating the trend toward ever greater pixel shader power. nVidia is well aware of this trend and is moving to balance computational needs by implementing unified shaders through which the data streams pass. This yields maximum efficiency and throughput.

nVidia says: "The GeForce 8800 development team was well aware that high-end DirectX 10 3D games would require a lot of hardware power to compute shaders. While DirectX 10 stipulates a unified instruction set, the standard does not require a unified GPU shader design. But the GeForce 8800 engineers believed that a unified GPU shader architecture would best balance DirectX 10 shader loads, improving the GPU's architectural efficiency and properly allocating the available power."

GeForce 8800 GTX | 128 SIMD stream processors



The processor core runs at 575 MHz in the GeForce 8800 GTX and at 500 MHz in the GeForce 8800 GTS. While the rest of the core runs at 575 MHz (or 500 MHz), the shader core uses its own clock: 1350 MHz in the GeForce 8800 GTX and 1200 MHz in the 8800 GTS.

Each shader element of the core is called a streaming processor. The GeForce 8800 GTX uses 16 blocks of eight such elements, for a total of 128 stream processors. As with the ATi R580 and R580+ designs and their blocks of pixel shader units, nVidia plans to add and remove blocks in future products. This is exactly what we see with the 96 stream processors of the GeForce 8800 GTS.



Click on the picture to enlarge.

GeForce 8800 GTX | specification comparison table

nVidia previously couldn't do full-screen anti-aliasing and HDR lighting at the same time, but that is now history. Each raster operation unit (ROP) supports framebuffer blending, so both FP16 and FP32 render targets can be used with multisample anti-aliasing. With D3D10 color and Z acceleration in the ROPs, up to eight multiple render targets can be used, along with new compression technologies.

The GeForce 8800 GTX can fetch 64 textures per clock, which at 575 MHz gives 36.8 billion texture samples per second (GeForce 8800 GTS = 32 billion/s). The GeForce 8800 GTX has 24 raster operation units (ROPs), and at 575 MHz the peak pixel fill rate is 13.8 gigapixels/s. The GeForce 8800 GTS version has 20 ROPs and a peak fill rate of 10 gigapixels/s at 500 MHz.
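These peak figures are just units multiplied by clock. A quick sanity check of the arithmetic from the paragraph above (a plain C sketch, not vendor code):

```c
#include <stdio.h>

int main(void)
{
    /* GeForce 8800 GTX: 64 texture fetches per clock at 575 MHz */
    double texel_rate = 64 * 575e6;   /* = 36.8e9 texels/s */
    /* 24 ROPs, one pixel per clock each, at 575 MHz */
    double pixel_rate = 24 * 575e6;   /* = 13.8e9 pixels/s */

    printf("texture rate: %.1f Gtex/s\n", texel_rate / 1e9);
    printf("pixel fill:   %.1f Gpix/s\n", pixel_rate / 1e9);
    return 0;
}
```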

nVidia GeForce specifications

| Specification | 8800 GTX | 8800 GTS | 7950 GX2 | 7900 GTX | 7800 GTX 512 | 7800 GTX |
| --- | --- | --- | --- | --- | --- | --- |
| Process technology (nm) | 90 | 90 | 90 | 90 | 110 | 110 |
| Core | G80 | G80 | G71 | G71 | G70 | G70 |
| Number of GPUs | 1 | 1 | 2 | 1 | 1 | 1 |
| Transistors per core (million) | 681 | 681 | 278 | 278 | 302 | 302 |
| Vertex/shader frequency (MHz) | 1350 | 1200 | 500 | 700 | 550 | 470 |
| Core frequency (MHz) | 575 | 500 | 500 | 650 | 550 | 430 |
| Memory frequency (MHz) | 900 | 800 | 600 | 800 | 850 | 600 |
| Effective memory frequency (MHz) | 1800 | 1600 | 1200 | 1600 | 1700 | 1200 |
| Vertex units | 128 | 96 | 16 | 8 | 8 | 8 |
| Pixel units | 128 | 96 | 48 | 24 | 24 | 24 |
| ROPs | 24 | 20 | 32 | 16 | 16 | 16 |
| Memory bus width (bits) | 384 | 320 | 256 | 256 | 256 | 256 |
| GPU memory (MB) | 768 | 640 | 512 | 512 | 512 | 256 |
| Memory bandwidth (GB/s) | 86.4 | 64 | 38.4 | 51.2 | 54.4 | 38.4 |
| Vertices/s (million) | 10,800 | 7,200 | 2,000 | 1,400 | 1,100 | 940 |
| Pixel throughput (ROPs x frequency, Gpix/s) | 13.8 | 10 | 16 | 10.4 | 8.8 | 6.88 |
| Texture throughput (pixel pipelines x frequency, Gtex/s) | 36.8 | 32 | 24 | 15.6 | 13.2 | 10.32 |
| RAMDAC (MHz) | 400 | 400 | 400 | 400 | 400 | 400 |
| Bus | PCI Express | PCI Express | PCI Express | PCI Express | PCI Express | PCI Express |

Pay attention to the memory bus width. As the diagram on the previous page shows, the GeForce 8800 GTX GPU uses six memory partitions, each with a 64-bit memory interface, giving a total width of 384 bits. 768 MB of GDDR3 memory is connected to the memory subsystem, which is built on a high-speed crossbar, as in the GeForce 7 GPUs. This crossbar supports DDR1, DDR2, DDR3, GDDR3 and GDDR4 memory.

The GeForce 8800 GTX uses GDDR3 memory at a default frequency of 900 MHz (the GTS version runs at 800 MHz). With a 384-bit (48-byte) bus at 900 MHz (1800 MHz effective DDR), bandwidth is a whopping 86.4 GB/s. And 768 MB of memory allows far more complex models and textures to be stored, at higher resolution and quality.
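The bandwidth figure is simply the bus width in bytes times the effective transfer rate. A minimal sketch with the numbers above:

```c
#include <stdio.h>

int main(void)
{
    /* GeForce 8800 GTX: 384-bit bus = 48 bytes per transfer;
       GDDR3 at 900 MHz = 1800 MHz effective (two transfers per clock) */
    double bytes_per_transfer = 384.0 / 8.0;  /* 48 bytes */
    double transfers_per_sec  = 1800e6;

    printf("bandwidth: %.1f GB/s\n",
           bytes_per_transfer * transfers_per_sec / 1e9);  /* 86.4 */
    return 0;
}
```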

GeForce 8800 GTX | nVidia knocks out ATi


Click on the picture to enlarge.

We have good news and bad news. The good news: the cards are faster than anything else out there, they are very quiet, and they have so many interesting technical features that software for some of them doesn't even exist yet. The bad news: they are not available for sale. Well, yes, there is always something wrong with new hardware. Sparkle sells these cards for 635 euros. We are already getting used to such prices for top-end hardware.

The board is 27 centimeters long, so it will not fit in every case. If your computer has hard drives mounted right behind the PCIe slots, installing a GeForce 8800 GTX is likely to be tricky. Of course, the drives can always be moved to a 5.25-inch bay with an adapter, but you must admit there is little pleasant about the problem itself.


Click on the picture to enlarge.

The technical implementation is nothing to laugh at: this is the best piece of hardware you can buy for your PC this New Year. Why has the GeForce 8800 GTX received so much attention from the online public? Simple: record performance. In Half-Life 2: Episode One at 1600x1200, the GeForce 8800 GTX delivers 60 percent more frames per second than the top cards of the Radeon X1000 family (X1900 XTX and X1950 XTX).

Oblivion runs incredibly smoothly at all levels. More precisely, with HDR rendering enabled in Oblivion, the speed never drops below 30 fps. Titan Quest never dips below 55 frames per second. Sometimes you wonder whether the benchmark has hung or something has happened to the levels. Enabling full-screen anti-aliasing and anisotropic filtering does not faze the GeForce 8800 at all.

This is the fastest graphics card of all models released in 2006. Only the Radeon X1950 XTX in paired CrossFire mode catches up with the 8800 GTX in places. So if you were asking which card doesn't slow down in Gothic 3, Dark Messiah and Oblivion, here is the answer: the GeForce 8800 GTX.

GeForce 8800 GTX | Two power sockets

Power is supplied through two connectors on the top of the board. Both are necessary: remove the cable from the left one and 3D performance drops dramatically. Want to drive the neighbors crazy? Then pull out the right one: the insane squeal the board emits would make your car alarm jealous, and the board itself will not start at all. Note that nVidia recommends a power supply of at least 450 watts for the GeForce 8800 GTX, with 30 amps available on the 12-volt line.


On a GeForce 8800 GTX, both power sockets must be connected. Click on the picture to enlarge.

The two power connectors are easy to explain: according to the PCI Express specification, a single PCIe slot can supply no more than 75 watts. Our test system consumes about 180 watts in 2D mode alone. That is a whopping 40 watts more than with the Radeon X1900 XTX or X1950 XTX. In 3D mode the system draws about 309 watts, while the same Radeon X1900/1950 XTX systems consume 285 to 315 watts. It is not clear to us why the GeForce 8800 needs so much power on a plain Windows desktop.
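For context on why two connectors are needed: the slot itself is capped at 75 W, and each 6-pin auxiliary connector is rated for another 75 W (the per-connector figure comes from the PCI Express specification, not from this review). A rough budget sketch:

```c
#include <stdio.h>

int main(void)
{
    /* PCI Express power budget (specification values, W) */
    int slot_w     = 75;  /* through the PCIe slot itself  */
    int six_pin_w  = 75;  /* per 6-pin auxiliary connector */
    int connectors = 2;   /* the GeForce 8800 GTX has two  */

    printf("available board power: %d W\n",
           slot_w + connectors * six_pin_w);   /* 225 W */
    return 0;
}
```

With a single connector the board would be limited to 150 W, clearly too little headroom for a card of this class.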

Two more connectors are reserved for SLI mode. According to nVidia's documentation, SLI requires only one plug; the second is not used yet. In theory, with two connectors you could link more than two cards in a multi-board system. The second connector may also be tied to the growing popularity of hardware physics calculations: perhaps another video card will be connected through it to calculate the physics in a game engine. Or perhaps it points to four-board Quad SLI, or something like that.


An additional connector is now reserved for SLI, but with the current driver version only one can be used. Click on the picture to enlarge.

GeForce 8800 GTX | Quiet cooling system

The GeForce 8800 is equipped with a very quiet 80 mm blower. Like on the Radeon X1950 XTX, it sits at the very end of the board so as to push cool air across the entire surface of the GeForce 8800 and then out. At the end of the board, a special grille releases the hot air not only outward through the opening that occupies the second slot, but also downward, straight into the case. In general the system is simple, but there are a few controversial points.


Warm air is thrown out through the hole in the second slot, but some of it gets back into the case through the grille on the side of the GeForce 8800 GTX. Click on the picture to enlarge.

If the PCIe slots in your computer are close together and two boards sit in SLI with little gap between them, the temperature in that spot will be considerable. The lower card will be additionally heated by the upper one through that same side grille of the cooling system. And it's better not even to think about what happens if you install three cards: you get an excellent household space heater. In frosty weather you can work by an open window.

When the board is installed alone, the cooling system is impressive and does its job one hundred percent. Like the GeForce 7900 GTX cards, it runs quietly: over the entire six-hour test run under constant high load, the board was never heard. Even when fully loaded, the cooler at medium speed copes with heat removal. If you put your ear to the back of the computer, you will hear only a slight noise, a kind of quiet rustling.


The 80mm cooler is quiet and never runs at full capacity. The board's cooling system occupies two slots. Click on the picture to enlarge.

The dedicated ForceWare 96.94 driver that nVidia prepared for the GeForce 8800 GTX does not report temperature monitoring data. Prior to this release you could choose between the classic and the new interface, but press driver 96.94 contains only the new version of the settings panel. If you try to open the frequency and temperature settings, the driver sends you to the nVidia website to download the nTune utility, which is where these functions are configured. We downloaded the 30 MB archive and installed it. On first start, we got a complete freeze of the computer and Windows.

If, after installing nTune, you select frequency and temperature control in the settings panel, a special information page opens showing the motherboard settings. You won't find any actual settings there, that is, frequency and temperature readings. So we took temperature measurements the classical way, with an infrared thermometer. Under full load the measurements showed 71 degrees Celsius; in 2D mode the card stayed within 52 to 54 degrees.

Hopefully nVidia will release a standard version of ForceWare for the GeForce 8800. The classic configuration interface is sometimes more convenient; besides, it displays temperature information, and with coolbits you can adjust the frequencies. The new driver together with nTune occupies about 100 megabytes and is split across a considerable number of tabs and windows. It is not always convenient to work with.


The GeForce 8800 GTX chip has as many as 681 million transistors, it is manufactured using 90 nanometer technology at the TSMC factory. Click on the picture to enlarge.

The G80 chip of the GeForce 8800 GTX has 681 million transistors, twice as many as the Conroe core of Intel Core 2 Duo processors or the GeForce 7 chip. The GPU runs at 575 MHz, and the 384-bit memory interface serves 768 megabytes. For memory, nVidia chose high-speed GDDR3 running at 900 MHz.

For comparison: the memory of the GeForce 7900 GTX runs at 800 MHz, and the GeForce 7950 GT at 700 MHz. Radeon X1950 XTX graphics cards use 1000 MHz GDDR4 memory. The GeForce 8800 GTS card has a core frequency of 500 MHz, a memory capacity of 640 MB with a frequency of 800 MHz.

Test results show that full-screen anti-aliasing and anisotropic filtering finally do not degrade performance when enabled. In resource-hungry games like Oblivion you used to have to watch this; now you can turn everything up to maximum. On previous nVidia generations such games ran smoothly only at resolutions up to 1024x768, while HDR rendering with version 3 pixel shaders consumed a huge amount of resources. The new cards are so powerful that enabling 4xAA and 8xAF still allows play at resolutions up to 1600x1200. The G80 chip supports maximum settings of 16x anti-aliasing and 16x anisotropic filtering.


The GeForce 8800 GTX supports 16x anti-aliasing and anisotropic filtering.

Compared to any single ATi card, the GeForce 8800 GTX has no competitors. The new nVidia can now handle HDR rendering with version 3 shaders and anti-aliasing at the same time. HDR rendering produces extreme reflections and highlights, simulating the dazzle effect of stepping out of a dark room into bright light. Unfortunately, many older games - Half-Life 2: Episode One, Need for Speed: Most Wanted, SpellForce 2, Dark Messiah and others - use only version 2 shaders for HDR effects. Newer games like Gothic 3 or Neverwinter Nights 2 use the older Bloom method, as in Black & White 2. And while Neverwinter Nights 2 can be configured to support HDR rendering, the developers are cautious with these features so that players with ordinary hardware still get decent frame rates. This is done properly in Oblivion, which has both Bloom and outstanding HDR rendering effects via version 3 shaders.

The G80 also supports Shader Model 4.0, and its most important innovation is the reworked rendering pipeline architecture, no longer divided into pixel and vertex shaders. The new shader core handles all data: vertex, pixel, geometry and even physics. Performance did not suffer for it: Oblivion runs almost twice as fast as on the pixel-shader-optimized Radeon X1900 XTX or X1950 XTX.

What the card supports in DirectX 10 terms cannot be tested yet: Windows Vista, DirectX 10 and games for it do not exist yet. On paper, however, everything looks more than decent. Geometry shaders support displacement mapping, which will allow even more realistic imagery, for example displaced geometry such as grooved and corrugated surfaces. Stream output will enable better shader effects for particles and physics. The Quantum Effects technology handles calculations of smoke, fog, fire and explosions, taking them off the central processor. All of this together will mean significantly more shader and physics effects in future games. How it all turns out in practice, in which games and in what form, the future will show.

GeForce 8800 GTX | Boards in the test

Video cards on nVidia chips

| Video card and chip | Codename | Memory | HDR-R | Vertex/pixel shaders | GPU frequency | Memory frequency |
| --- | --- | --- | --- | --- | --- | --- |
| nVidia GeForce 8800 GTX | G80 | 768 MB GDDR3 | Yes | 4.0 (unified) | 575 MHz | 1800 MHz |
| Asus + Gigabyte GeForce 7900 GTX SLI | G71 | 512 MB GDDR3 | Yes | 3.0/3.0 | 650 MHz | 1600 MHz |
| Gigabyte GeForce 7900 GTX | G71 | 512 MB GDDR3 | Yes | 3.0/3.0 | 650 MHz | 1600 MHz |
| nVidia GeForce 7950 GT | G71 | 512 MB GDDR3 | Yes | 3.0/3.0 | 550 MHz | 1400 MHz |
| Asus GeForce 7900 GT Top | G71 | 256 MB GDDR3 | Yes | 3.0/3.0 | 520 MHz | 1440 MHz |
| nVidia GeForce 7900 GS | G71 | 256 MB GDDR3 | Yes | 3.0/3.0 | 450 MHz | 1320 MHz |
| Asus GeForce 7800 GTX EX | G70 | 256 MB GDDR3 | Yes | 3.0/3.0 | 430 MHz | 1200 MHz |
| Gigabyte GeForce 7800 GT | G70 | 256 MB GDDR3 | Yes | 3.0/3.0 | 400 MHz | 1000 MHz |
| Asus GeForce 7600 GT | G73 | 256 MB GDDR3 | Yes | 3.0/3.0 | 560 MHz | 1400 MHz |
| nVidia GeForce 6800 GT | NV45 | 256 MB GDDR3 | Yes | 3.0/3.0 | 350 MHz | 1000 MHz |
| Gainward GeForce 7800 GS+ GSa AGP | G71 | 512 MB GDDR3 | Yes | 3.0/3.0 | 450 MHz | 1250 MHz |

The following table lists the ATi cards that took part in our testing.

ATi-based video cards

| Video card and chip | Codename | Memory | HDR-R | Vertex/pixel shaders | GPU frequency | Memory frequency |
| --- | --- | --- | --- | --- | --- | --- |
| Club 3D + Club 3D Radeon X1950 XTX CF | R580+ | 512 MB GDDR4 | Yes | 3.0/3.0 | 648 MHz | 1998 MHz |
| Club 3D Radeon X1950 XTX | R580+ | 512 MB GDDR4 | Yes | 3.0/3.0 | 648 MHz | 1998 MHz |
| HIS + HIS Radeon X1900 XTX CF | R580 | 512 MB GDDR3 | Yes | 3.0/3.0 | 621 MHz | 1440 MHz |
| Gigabyte Radeon X1900 XTX | R580 | 512 MB GDDR3 | Yes | 3.0/3.0 | 648 MHz | 1548 MHz |
| PowerColor Radeon X1900 XT | R580 | 512 MB GDDR3 | Yes | 3.0/3.0 | 621 MHz | 1440 MHz |
| ATI Radeon X1900 XT | R580 | 256 MB GDDR3 | Yes | 3.0/3.0 | 621 MHz | 1440 MHz |
| Sapphire Radeon X1900 GT | R580 | 256 MB GDDR3 | Yes | 3.0/3.0 | 574 MHz | 1188 MHz |
| HIS Radeon X1650 Pro Turbo | RV535 | 256 MB GDDR3 | Yes | 3.0/3.0 | 621 MHz | 1386 MHz |
| GeCube Radeon X1300 XT | RV530 | 256 MB GDDR3 | Yes | 3.0/3.0 | 560 MHz | 1386 MHz |

GeForce 8800 GTX | Test configuration

We used three reference test systems, all built from identical components: a dual-core AMD Athlon 64 FX-60 processor at 2.61 GHz, 2 GB of Mushkin HP 3200 memory (2-3-2 timings) and two 120 GB Hitachi hard drives in RAID 0. The difference was in the motherboards. For tests of single cards and of nVidia boards in SLI mode we used an Asus A8N32-SLI Deluxe motherboard. For video cards in CrossFire mode (marked CF below) we used the same computer with an ATi reference motherboard based on the RD580 chipset. Finally, AGP video cards were tested on a computer of the same configuration built around an Asus A8V Deluxe motherboard. The configuration data is summarized below.

For all nVidia graphics cards (including SLI) and single ATi cards:
  • CPU: AMD Athlon 64 FX-60 Dual Core, 2.61 GHz
  • Bus frequency: 200 MHz
  • Motherboard: Asus A8N32-SLI Deluxe
  • Chipset: nVidia nForce4
  • Memory: Mushkin 2x1024 MB HP 3200 2-3-2
  • HDD: Hitachi 2 x 120 GB SATA, 8 MB cache
  • DVD: Gigabyte GO-D1600C
  • LAN controller: Marvell
  • Sound controller: Realtek AC97
  • Power supply: Silverstone SST-ST56ZF 560 W

For tests of ATi video cards in CrossFire mode:
  • CPU: AMD Athlon 64 FX-60 Dual Core, 2.61 GHz
  • Bus frequency: 200 MHz
  • Motherboard: reference ATi
  • Chipset: ATi RD580
  • Memory: Mushkin 2x1024 MB HP 3200 2-3-2
  • LAN controller: Marvell
  • Sound controller: AC97

For tests of AGP video cards:
  • CPU: AMD Athlon 64 FX-60 Dual Core, 2.61 GHz
  • Bus frequency: 200 MHz
  • Motherboard: Asus A8V Deluxe
  • Chipset: VIA K8T800 Pro
  • Memory: Mushkin 2x1024 MB HP 3200 2-3-2
  • LAN controller: Marvell
  • Sound controller: Realtek AC97

We used Windows XP Professional SP1a to test single video cards and nVidia cards in SLI mode; CrossFire and AGP video cards were tested under Windows XP Professional SP2. Driver and software versions are summarized in the following table.

Drivers and configuration:
  • ATi graphics cards: ATI Catalyst 6.6; X1900 XTX, X1950 + CrossFire, X1650 + CrossFire, X1300 XT + CrossFire, CrossFire X1900, CrossFire X1600 XT: ATI Catalyst 6.7 (equivalent to Catalyst 6.8); CrossFire X1600 Pro, CrossFire X1300 Pro: ATI Catalyst 6
  • nVidia graphics cards: nVidia ForceWare 91.31; 7900 GS: nVidia ForceWare 91.47; 7950 GT: nVidia ForceWare 91.47 (special); 8800 GTX: nVidia ForceWare 96.94 (special)
  • Operating system: single cards and SLI: Windows XP Pro SP1a; ATi CrossFire and AGP video cards: Windows XP Pro SP2
  • DirectX version: 9.0c
  • Chipset drivers: nVidia nForce4 6.86; AGP: VIA Hyperion Pro V509A

GeForce 8800 GTX | Test results

We received the reference board for THG directly from nVidia, together with the special ForceWare 96.94 driver prepared exclusively for the press. The GeForce 8800 GTX is a DirectX 10 and Shader Model 4.0 compatible card, and its performance in DirectX 9 applications with version 2 or version 3 pixel shaders is stunning.

Enabling anti-aliasing and anisotropic filtering has almost no performance impact. In Half-Life 2: Episode One, the GeForce 8800 GTX cannot be slowed down: at 1600x1200 the chip is 60 percent faster than the Radeon X1950 XTX, and in Oblivion performance is double that of the Radeon X1900 XTX or X1950 XTX. In Prey at 1600x1200 the card is a whopping 55 percent faster than the Radeon X1950 XTX. In Titan Quest the frame rate stays at 55 fps no matter what resolution you set.

In the Half-Life 2: Episode One tests with HDR rendering, the board's results are impressive, but at low resolutions it loses to the Radeon X1950 XTX boards in CrossFire mode, staying roughly on par with GeForce 7900 GTX SLI solutions. Note that at low resolutions the video card is not the limiting factor. The higher we crank the settings, the more interesting the result.

With anti-aliasing and anisotropic filtering enabled, the picture starts to change. All the boards lose a little in performance, but the GeForce 8800 GTX drops very insignificantly - by only 10 fps on average, while the dual ATi Radeon X1950 XTX loses as much as 20 fps in CrossFire mode.

As soon as we go past 1280x1024 with anti-aliasing and anisotropic filtering enabled, the single GeForce 8800 GTX becomes the undisputed leader, exceeding the Radeon X1950 XTX by almost 35 fps. This is a significant difference.

And it gets better: at 1600x1200 with anti-aliasing and anisotropic filtering, the lead over all other boards becomes overwhelming - twice the GeForce 7900 GTX SLI and slightly less over the CrossFire Radeon X1950 XTX. Impressive!

Finally, let's look at how fps falls as resolution and image quality rise. The GeForce 8800 GTX shows only an insignificant drop: from plain 1024x768 to 1600x1200 with anti-aliasing and anisotropic filtering, the difference is just over 20 fps. The previous top solutions from ATi and nVidia are left far behind.

Hard Truck: Apocalypse is demanding on both the video card and the central processor, which explains the virtually identical performance at 1024x768 with simple trilinear filtering and full-screen anti-aliasing turned off.

As soon as you switch to 4xAA and 8x anisotropic filtering, the results start to diverge. The lower-end cards lose a significant amount of performance, while the GeForce 8800 GTX seems not to notice the improvement in picture quality.

At 1280x960 the difference grows even more, but the GeForce 8800 GTX delivers the same performance as before. Clearly, the Athlon 64 FX-60 is incapable of bringing this video card to its knees.

At 1600x1200, all the single cards start to slide toward unplayable frame rates. But the GeForce 8800 GTX delivers its 51 fps as if nothing had happened.

Consider the performance drop as the settings increase: the CrossFire Radeon X1950 XTX and the GeForce 7900 GTX stay close, while the previous-generation cards have long been on their knees begging for mercy.

In Oblivion, a game that loads the video card to the limit, the picture is initially depressing for all boards except the Radeon X1950 XTX in CrossFire mode and the GeForce 8800 GTX. We collected statistics both in open outdoor locations and when rendering indoor spaces. It can be seen that in the open air the GeForce 8800 GTX stands next to, or slightly behind, the dual Radeon X1950 XTX.






But when the resolution reaches 1600x1200, our GeForce 8800 GTX pulls far ahead. The gap is especially visible in indoor levels.


Look at the drop in performance as resolution and quality increase. The picture needs no comment: in indoor locations the speed is unshakable.


In Prey, the card sits between the single ATi Radeon X1950 XTX solutions and the same cards in CrossFire mode. And the higher the resolution, the better the GeForce 8800 GTX looks.




It is useless to compare the GeForce 8800 GTX with single-card solutions from ATi and nVidia: the gap at high resolutions is enormous, and even at 1024x768 with anti-aliasing it is impressive.

In Rise of Nations: Rise of Legends, the card is the sole leader. Calculated as a percentage, the gap between the CrossFire Radeon X1950 XTX and the GeForce 8800 GTX is very large indeed; counted in fps, the difference is less dramatic, but still significant.




Notice how the speed falls as resolution rises. At all settings, the GeForce 8800 GTX leads not only the single boards but also the SLI/CrossFire solutions.

In Titan Quest, nVidia's cards show their best side. The frame rate does not change from 1024x768 up to 1600x1200 with anti-aliasing and anisotropic filtering.




The following graph illustrates what is happening well: the performance of the GeForce 8800 GTX stays at the same level regardless of the settings.

In 3DMark06 the card performs excellently with both version 2 and version 3 shaders. Note how little the performance drops when anisotropic filtering and anti-aliasing are enabled.


Increasing the resolution is not scary either. The card keeps pace with the SLI and CrossFire solutions while significantly outperforming all previous leaders of the single-card race.


To give you a better idea of gaming performance, we have rearranged the graphs. Here there is no comparison, only the net result of a single video card. Note that the performance of the GeForce 8800 GTX barely changes from resolution to resolution; the limiting factor in all games is the insufficiently fast AMD Athlon 64 FX-60 processor. In the future, with the release of much faster chips, the card will show even better results in the same games. We suspect that even the latest-generation Core 2 Quad cannot push the GeForce 8800 GTX to its limit.




So, having finished with the test results, let's try to put together a value rating of the video cards. To do this we gather the results of all the game tests and weigh them against the price of each solution. We take the recommended prices as the basis, that is, without the markups of specific stores. Of course, the cards will be very expensive at first, and many stores will charge excessive margins, but prices will then drop, and you will probably be able to get a GeForce 8800 GTX at a reasonable price fairly soon.

As we can see, the GeForce 8800 GTX outperforms almost every solution, including dual CrossFire and SLI setups. In absolute terms the GeForce 8800 GTX is very fast. But what about the price?

The price matches: the manufacturer asks 635 euros. That is a lot, but two Radeon X1900 XTX boards in CrossFire mode will cost you more, 700 euros, and two Radeon X1950 XTX or an SLI pair of GeForce 7900 GTX as much as 750 euros. Given that a single GeForce 8800 GTX beats these solutions in some tests while taking up less space in the case, there is something to think about.

Finally, let's divide the frames per second by the money. This figure turns out better than for SLI and CrossFire. Of course, each fps costs more than with the GeForce 7800 GTX EX, and noticeably more than with the Radeon X1300 XT, but the board's performance is adequate to its price. A very effective solution in terms of price/performance.
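As a rough illustration of that metric, using figures quoted in this review - the 635-euro price and the 51 fps the card held in Hard Truck: Apocalypse at 1600x1200 (any other card would be compared the same way with its own numbers):

```c
#include <stdio.h>

int main(void)
{
    /* GeForce 8800 GTX: values taken from this review */
    double fps   = 51.0;    /* Hard Truck: Apocalypse, 1600x1200 */
    double price = 635.0;   /* euros */

    printf("GeForce 8800 GTX: %.3f fps per euro\n", fps / price);
    return 0;
}
```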

We decided to supplement our review with the test results of THG's American laboratory, where the GeForce 8800 GTS was also tested. Please note that, due to differences in test configuration, the results above should not be compared directly with the American lab's figures.


The GeForce 8800GTX is longer than the Radeon X1950XTX and most other cards on the market. The 8800GTS is somewhat shorter.

Like our other graphics card tests in 2006, these were run on the AMD Athlon 64 FX-60 platform. We will also show results for multi-GPU configurations and evaluate how the new cards behave when performance is CPU-limited (low resolution and image quality).

System hardware:
  • Processor: AMD Athlon 64 FX-60, 2.6 GHz, 1.0 GHz HTT, 1 MB L2 cache
  • Platform: nVidia: Asus A8N32-SLI Premium, nVidia nForce4 SLI, BIOS 1205
  • Memory: Corsair CMX1024-4400Pro, 2 x 1024 MB DDR400 (CL 3.0-4-4-8)
  • HDD: Western Digital Raptor WD1500ADFD, 150 GB, 10,000 rpm, 16 MB cache, SATA150
  • Network: integrated nForce4 Gigabit Ethernet
  • Video cards:
    ATi Radeon X1950 XTX, 512 MB GDDR4, 650 MHz core, 1000 MHz memory (2.00 GHz DDR)
    nVidia GeForce 8800 GTX, 768 MB GDDR3, 575 MHz core, 1.35 GHz stream processors, 900 MHz memory (1.80 GHz DDR)
    XFX GeForce 8800 GTS, 640 MB GDDR3, 500 MHz core, 1.2 GHz stream processors, 800 MHz memory (1.60 GHz DDR)
    nVidia GeForce 7900 GTX, 512 MB GDDR3, 675 MHz core, 820 MHz memory (1.64 GHz DDR)
  • Power supply: PC Power & Cooling Turbo-Cool 1000 W
  • CPU cooler: Zalman CNPS9700 LED

System software and drivers:
  • OS: Microsoft Windows XP Professional 5.10.2600, Service Pack 2
  • DirectX version: 9.0c (4.09.0000.0904)
  • Graphics drivers: ATi Catalyst 6.10 WHQL; nVidia ForceWare 96.94 beta

During the first 3DMark run, we ran tests at all resolutions, but with full-screen anti-aliasing and anisotropic filtering turned off. In the second run, we enabled the 4xAA and 8xAF image enhancement options.

nVidia clearly comes first in 3DMark05. The GeForce 8800 GTX achieves the same result at 2048x1536 as the ATi Radeon X1950 XTX at the default 1024x768. Impressive.

Doom 3 is usually dominated by nVidia cards, whose architecture suits this game well. But not long ago ATi managed to "take" this game with its new cards.

Here, for the first time, we run up against the limits of the CPU's processing power: at low resolution the result sits at around 126 frames per second. The ATi card is capable of more frames per second in this system configuration, and the reason lies in the drivers: ATi releases drivers that load the CPU less, leaving it more headroom to feed the graphics subsystem.

Overall, the new 8800 cards come out the winners. Looking at the results across all resolutions, the new DX10 cards outperform the Radeon X1950 XTX from 1280x1024 upward.

GeForce 8800 GTX and GTS | F.E.A.R.

nVidia cards usually lead in F.E.A.R. But, again, ATi's drivers load the CPU less. Of course, the results will differ on a faster platform, but if your computer is not cutting-edge, this test shows clearly how G80 cards will run on it. Apart from the 1024x768 test, though, the G80 simply kills the Radeon X1950 XTX. The GTX is a monster: whatever load we put on the GeForce 8800 GTX, the card always delivers more than 40 frames per second.


Click on the picture to enlarge.

The second screenshot (below) is taken on an 8800 GTX with the same settings.



Click on the picture to enlarge.

The nVidia picture is far superior in quality to the ATi screenshot; it seems nVidia has taken the lead in this respect again. This is one more advantage of nVidia cards based on the G80 chip.


Here is a table of new quality improvement opportunities on the G80 cards.

In addition to the new DX10 graphics cards, nVidia also unveiled several features available on G80-based cards. The first is a proprietary image quality technology called Coverage Sampled Antialiasing (CSAA).

The new full-screen anti-aliasing mode uses an area of 16 subsamples. According to nVidia, this makes it possible to compress "redundant color and depth information into the memory space and bandwidth of four or eight multisamples". The new quality level works more efficiently by reducing the amount of data per sample. If CSAA does not work with a particular game, the driver falls back to traditional anti-aliasing methods.


Click on the picture to enlarge.

Before we finish this review, let me touch on two more aspects of these graphics cards that have long been in development and will become more important over time. The first is video playback. During the reign of the GeForce 7, the ATi Radeon X1900 led in video playback quality. But the situation has changed with the arrival of unified shaders and the dedicated PureVideo core.

Thanks to smart algorithms and 128 computing units, the GeForce 8800 GTX scored 128 out of 130 points in HQV. In the near future we plan to publish a more detailed article on image quality, so stay tuned to our website.

Finally, a very strong point of the G80 is what nVidia calls CUDA. For years, scientists and enthusiasts have looked for ways to squeeze more performance out of powerful parallel processors, and a Beowulf cluster is, of course, not something everyone can afford. So various ways have been proposed for ordinary mortals to use a video card for computation.

The problem is this: the GPU is good at parallel computation but copes poorly with branching, which is exactly where the CPU does well. Moreover, until now anyone wanting to compute on a graphics card had to program shaders the way game developers do. nVidia decided to move ahead with the Compute Unified Device Architecture, or CUDA.


This is how CUDA can work for fluid simulation.

nVidia has released a C compiler whose resulting programs scale with the processing power of the GPU (for example, the 96 stream processors of the 8800 GTS or the 128 of the 8800 GTX). Programmers can now create programs that scale across both CPU and GPU resources. CUDA is sure to appeal to all kinds of distributed computing projects. But CUDA can be used not only for number crunching: it can also simulate effects such as volumetric fluids, cloth and hair. Through CUDA, physics calculations and potentially other parts of a game engine can be moved onto the GPU.
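To give an idea of the programming model, here is a minimal sketch of the kind of data-parallel C code the CUDA compiler handles: one lightweight thread per array element, spread by the hardware across however many stream processors the GPU has. This is our own illustrative example, not code from nVidia's SDK:

```c
#include <cuda_runtime.h>

// Each thread scales one array element. The hardware distributes the
// threads across the available stream processors (96 on the 8800 GTS,
// 128 on the 8800 GTX), so the same program scales with the GPU.
__global__ void scale(float *data, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= k;
}

int main(void)
{
    const int n = 1 << 20;
    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));   // buffer in video memory
    cudaMemset(d, 0, n * sizeof(float));          // real code would copy data in
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // n threads in blocks of 256
    cudaDeviceSynchronize();                      // wait for the GPU to finish
    cudaFree(d);
    return 0;
}
```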


Developers will be provided with a complete SDK.

GeForce 8800 GTX and GTS | Conclusion

Those moving up from a GeForce 6 will get almost a threefold increase in performance. It doesn't matter when DirectX 10 games arrive, and it doesn't matter what version 4 shaders will give us: today the GeForce 8800 GTX is the fastest chip. Games like Oblivion, Gothic 3 and Dark Messiah were waiting for the G80 chip and these graphics cards. Playing without slowdowns is possible again. The GeForce 8800 GTX has enough power for all the latest games.

The cooling system is quiet: the 80 mm cooler on the reference card was never heard. Even at full load its rotation speed stays low. One wonders how ATi will respond. In any case, nVidia has done a damn good job and released a really powerful piece of hardware.

Disadvantages: the board is 27 centimeters long and occupies the space of two PCI Express slots, and the power supply must deliver at least 450 watts (30 A on the 12 V line). For the GeForce 8800 GTS the minimum is a 400-watt power supply with 30 amps on the 12-volt rail.

Following a long tradition, nVidia cards are already available in online stores. On the international market, the recommended price of the GeForce 8800 GTX is $599, and of the GeForce 8800 GTS $449. DX10 games should appear soon. And no less important, you will also get a better picture in existing games.


This is what a DX10 supermodel might look like. Click on the picture to enlarge.

GeForce 8800 GTX and GTS | Editor's opinion

Personally, I am impressed with nVidia's implementation of DX10/D3D10. Seeing Crysis live, along with the many demos, is impressive. The CUDA implementation turns the graphics card into more than just a frame renderer: programs will be able to use not only CPU resources but also the full parallel power of the universal GPU shader core. I can't wait to see such solutions in the real world.

But the G80 leaves a lot to be desired. What? Of course, new games. Gentlemen developers, would you be so kind as to release the DX10 games as soon as possible.

GeForce 8800 GTX | Photo gallery

In the year and more since the release of video cards based on NVIDIA's GeForce 8800 line, the graphics accelerator market has developed in a way extremely unfavorable for the end customer. An overclocker ready to shell out a tidy sum for a top-end video card simply had no alternative. The competitor from ATI (AMD) appeared later and ultimately could not compete with the GeForce 8800 GTX, let alone the subsequent NVIDIA GeForce 8800 Ultra. NVIDIA's marketers quickly realized that with no competition there was no need to cut the prices of top video cards at all. As a result, throughout this period the prices of the GeForce 8800 GTX and Ultra stayed at the same very high level, and only a few could afford such cards.

However, the upper price segment has never been the defining, priority segment for makers of graphics chips and video cards. Leadership in this class is certainly prestigious for any company, but economically the mid-price range is the most profitable. Nevertheless, as recent tests of the AMD Radeon HD 3850 and 3870, which claim dominance of the middle class, have shown, the performance of these cards is unsatisfactory in modern games and essentially unacceptable in high-quality modes. The NVIDIA GeForce 8800 GT is faster than this pair, but it too falls short of comfortable frame rates in DirectX 10 games. What comes next, if you are able to pay a little extra? Until yesterday, practically nothing, since in price terms there is literally an abyss between the GT and the GTX.

But technical progress does not stand still: the new NVIDIA G92 chip, manufactured on a 65 nm process, has allowed the company not only to win over overclockers with the quite successful GeForce 8800 GT, but also, yesterday, December 11 at 17:00 Moscow time, to announce a new product, the GeForce 8800 GTS 512 MB. Despite the unassuming name, the new graphics accelerator differs substantially from the regular GeForce 8800 GTS. In today's article we will introduce one of the first GeForce 8800 GTS 512 MB cards to appear on the Russian market, check its thermals and overclocking potential and, of course, examine the newcomer's performance.


1. Technical characteristics of video cards participating in testing

The new product's technical characteristics are presented in the following table, alongside the other NVIDIA video cards of the GeForce 8800 family:

| Specification | GeForce 8800 GT | GeForce 8800 GTS | GeForce 8800 GTS 512 MB | GeForce 8800 GTX / Ultra |
| --- | --- | --- | --- | --- |
| GPU | G92 (TSMC) | G80 (TSMC) | G92 (TSMC) | G80 (TSMC) |
| Process technology, nm | 65 (low-k) | 90 (low-k) | 65 (low-k) | 90 (low-k) |
| Core area, mm² | 330 | 484 | 330 | 484 |
| Transistors, million | 754 | 681 | 754 | 681 |
| GPU frequency, MHz | 600 (1512 shader) | 513 (1188 shader) | 650 (1625 shader) | 575 / 612 (1350 / 1500 shader) |
| Effective memory frequency, MHz | 1800 | 1584 | 1940 | 1800 / 2160 |
| Memory size, MB | 256 / 512 | 320 / 640 | 512 | 768 |
| Memory type | GDDR3 | GDDR3 | GDDR3 | GDDR3 |
| Memory bus width, bits | 256 (4 x 64) | 320 | 256 (4 x 64) | 384 |
| Interface | PCI Express x16 (v2.0) | PCI Express x16 (v1.x) | PCI Express x16 (v2.0) | PCI Express x16 (v1.x) |
| Unified shader processors | 112 | 96 | 128 | 128 |
| Texture units | 56 (28) | 24 | 64 (32) | 32 |
| ROPs | 16 | 20 | 16 | 24 |
| Pixel / vertex shader support | 4.0 / 4.0 | 4.0 / 4.0 | 4.0 / 4.0 | 4.0 / 4.0 |
| Memory bandwidth, GB/s | ~57.6 | ~61.9 | ~62.1 | ~86.4 / ~103.7 |
| Theoretical peak shading rate, Gpix/s | ~9.6 | ~10.3 | ~10.4 | ~13.8 / ~14.7 |
| Theoretical peak texture sampling rate, Gtex/s | ~33.6 | ~24.0 | ~41.6 | ~36.8 / ~39.2 |
| Peak 3D power consumption, W | ~106 | | | ~180 |
| Power supply requirement, W | ~400 | ~400 | ~400 | ~450 / ~550 |
| Reference card dimensions, mm (L x W x H) | 220 x 100 x 15 | 228 x 100 x 39 | 220 x 100 x 32 | 270 x 100 x 38 |
| Outputs | 2 x DVI-I (Dual-Link), TV-Out, HDTV-Out, HDCP | 2 x DVI-I (Dual-Link), TV-Out, HDTV-Out | 2 x DVI-I (Dual-Link), TV-Out, HDTV-Out, HDCP | 2 x DVI-I (Dual-Link), TV-Out, HDTV-Out |
| Additionally | SLI support | SLI support | SLI support | SLI support |
| Recommended price, USD | 199 / 249 | 349~399 | 299~349 | 499~599 / 699 |

2. Review of BFG GeForce 8800 GTS 512 MB OC (BFGR88512GTSE)

The latest video card from a well-known overclocker company comes in a very compact box, decorated in dark colors.

Comparative testing of four GeForce 8800GTS 512 and 8800GT cards

Let's get acquainted with the GeForce 8800GTS 512 cards and compare them with the cheaper GeForce 8800GT and the veteran GeForce 8800GTX. Along the way, we break in a new test bench and collect DX10 driver flaws.

With the release of the GeForce 8800GTS 512 series, NVIDIA has noticeably strengthened its position. The newcomer replaced the more expensive, hotter and bulkier GeForce 8800GTX, and its only cuts relative to its predecessor are a narrower 256-bit memory bus (versus 384-bit on the GeForce 8800GTX) and a smaller amount of memory, 512 MB (versus 768 MB on the GeForce 8800GTX). The newcomer did not only receive cuts, though, but also some improvements: the number of texture units grew from 32 to 64, which undoubtedly compensates in part for the simplifications. The clock frequencies were also raised relative to the predecessor, and the video memory can easily be expanded to 1 GB simply by fitting higher-capacity chips, which some manufacturers have in fact already begun to do.

But although the GeForce 8800GTS 512 replaced the GeForce 8800GTX, its main competitor is not its predecessor but its closest relative, the GeForce 8800GT, and the crux of the matter is the GT's lower price. The GeForce 8800GTS 512 and GeForce 8800GT differ little from each other, since the GeForce 8800GT is a cut-down version of the GeForce 8800GTS 512 that, oddly enough, reached the market before the full version. Both cards carry 512 MB of video memory and, as today's tests show, the same memory chips. The main differences are in the graphics processor: in the GT version some of its functional blocks are disabled. More details are given in the table below:

As you can see, the GeForce 8800GT differs from its older sister by having its universal processors cut down to 112 and its texture units to 56. The cards also initially differ in clock frequencies, but that is irrelevant for today's review, since almost all the cards were factory overclocked. Let's find out how the differences on paper translate into reality.

Leadtek 8800GTS 512

Designers from Leadtek chose a bright orange color to draw attention to their video card, and they were absolutely right: the new product will not go unnoticed.
The face of the box shows a scene from a fictional shooter, beneath which are the card's technical characteristics and a note about the bonus: the full version of the game Neverwinter Nights 2.
The reverse side of the box lists the card's specifications, the package contents and standard information from NVIDIA.

The package includes:
  • S-Video > S-Video + component-out splitter;
  • DVI > D-Sub adapter;
  • CD with drivers;
  • CD with the PowerDVD 7 program.

The Leadtek 8800GTS 512 is based on the reference design, familiar to us from the GeForce 8800GT cards. Externally the newcomer is distinguished by its "two-story" cooling system, which, unlike its predecessor's, exhausts hot air out of the computer. The advantages of such a solution are obvious, and the reason for the improved cooler is most likely not that the "new" chip runs hotter, but that a buyer paying serious money has every right to a better product. Frankly, the reference system on the GeForce 8800GT does not do its job in the best way.
The reverse sides of the GeForce 8800GTS 512 and GeForce 8800GT look almost identical, differing in that the 8800GTS 512 has all its components mounted. We will see the differences later using the Leadtek 8800GT as an example; for now, let's get under the hood of the new product.
Having removed the cooling system, we can again confirm that the boards are identical. Note, however, the right side of the board, where the power subsystem lives. Where the GeForce 8800GT is empty, with only vacant solder pads, the Leadtek 8800GTS 512 is densely populated with components. The GeForce 8800GTS 512 turns out to have a more complex power subsystem than the GeForce 8800GT. This is not surprising: the GeForce 8800GTS 512 runs at higher frequencies and therefore makes stricter demands on power quality.
There are no external differences between the G92 chip in the Leadtek 8800GTS 512 and the G92 chip in the GeForce 8800GT video cards.
The new card uses the same Qimonda chips with 1.0 ns access time as the GeForce 8800GT; a set of eight chips makes up 512 MB of video memory. The rated frequency for such chips is 2000 MHz DDR, but the actual frequency set on the card is slightly lower.
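The 2000 MHz rating follows directly from the access time: one transfer per nanosecond is a 1000 MHz clock, and DDR signaling moves two transfers per clock. A quick sketch of the conversion:

```c
#include <stdio.h>

int main(void)
{
    /* 1.0 ns access time -> 1 / 1.0e-9 s = 1000 MHz real clock;
       DDR transfers twice per clock -> 2000 MHz effective rating */
    double access_ns = 1.0;
    double real_mhz  = 1000.0 / access_ns;

    printf("rated effective frequency: %.0f MHz DDR\n", 2.0 * real_mhz);
    return 0;
}
```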
The card's cooling system is aluminum with a copper plate. This combination of the two materials has long been used, achieving the required efficiency at lower weight and cost.
The finish of the copper "core" is satisfactory, but no more than that.
After removing the shroud from the cooling system we see a striking picture: no fewer than three heat pipes carry heat from the copper base to different parts of the radiator's aluminum fins. This scheme spreads the heat evenly, and the radiator's large dimensions should benefit cooling quality, which cannot be said of the GeForce 8800GT reference cooler. The latter also has three heat pipes, but both they and the radiator itself are noticeably smaller.

Differences, overclocking and efficiency of the cooling system


The differences from the GeForce 8800GT lie in the number of universal processors, increased from 112 to 128, and in the higher clocks across the entire GPU.
The Leadtek 8800GTS 512 frequencies correspond to the recommended ones and are equal to 650/1625 MHz for the GPU and 1944 MHz for the video memory.

Now for the card's heating, which we will check, as usual, in Oblivion at maximum settings.


The Leadtek 8800GTS 512 warmed up from 55 degrees at idle to 71 degrees under load, while the fan remained practically inaudible. That was not enough headroom for overclocking, however, so with the help of RivaTuner we raised the fan speed to 50% of maximum.
After that the GPU temperature did not rise above 64 degrees, and the noise stayed low. The Leadtek 8800GTS 512 overclocked to 756/1890 MHz on the GPU and 2100 MHz on the video memory. Such high frequencies were out of reach for the GeForce 8800GT, most likely because of its simplified power subsystem.

Now let's meet the next participant in today's testing: the ASUS EN8800GTS TOP video card.

ASUS EN8800GTS TOP


Looking at the packaging of ASUS's powerful video cards, you might get the impression that it holds not a video card but, say, a motherboard. It's all about the size: in our case the box is noticeably larger than the first participant's. The large front face of the package accommodates a big image of the company's trademark archer girl and a sizable chart showing a 7% speed advantage over the "regular" GeForce 8800GTS 512. The "TOP" abbreviation in the card's name indicates factory overclocking. A drawback of the packaging is that it is not obvious the card belongs to the GeForce 8800GTS 512 series, but by and large these are trifles. At first the scarcity of information on the box is surprising, but the truth reveals itself later, by itself, and in the literal sense.
As soon as you pick the box up by its handle, it swings open like a book. The information under the cover is devoted entirely to proprietary ASUS utilities, in particular ASUS Gamer OSD, which can now not only change brightness/contrast/color in real time but also display the FPS value, record video and take screenshots. The second utility described, Smart Doctor, monitors the card's supply voltages and frequencies and also allows overclocking. Notably, the ASUS utility can change both GPU clocks, that is, the core and the shader unit, which brings it close to the famous RivaTuner.
The reverse side of the box contains a little bit of everything, in particular, a brief description of the Video Security utility designed to use a computer as a "smart" video surveillance system in online mode.
The card is bundled on the "nothing extra" principle:
  • PCI Express power adapter;
  • S-Video > component-out adapter;
  • DVI > D-Sub adapter;
  • bag for 16 discs;
  • CD with drivers;
  • CD with documentation;
  • brief video card installation instructions.

Outwardly the card is almost an exact copy of the Leadtek 8800GTS 512, and no wonder: both cards follow the reference design and were most likely produced at the same factory to NVIDIA's order, and only then shipped to Leadtek and ASUS. Simply put, today's Leadtek card could just as well have become an ASUS card, and vice versa.
It is clear that the reverse side of the video card is also no different from that of the Leadtek 8800GTS 512, except that they have different branded stickers.
There is also nothing unusual under the cooling system: the power subsystem on the right side of the board is fully populated, and in the center sit the G92 graphics processor with 128 active stream processors and eight memory chips making up 512 MB in total.
The memory chips are manufactured by Qimonda and have a 1.0 ns access time, which corresponds to a frequency of 2000 MHz.
As on the Leadtek 8800GTS 512, the GPU's appearance gives no hint of its noble origin.
The cooling system of the ASUS EN8800GTS TOP video card is exactly the same as that of the Leadtek 8800GTS 512 video card: a copper "core" is built into the aluminum heatsink to remove heat from the GPU.
The polishing quality of the copper core is satisfactory, like that of its predecessor.
The heat from the copper core is distributed over the aluminum fins using three copper heat pipes. We have already seen the effectiveness of this solution on the example of the first card.

Rated frequencies and overclocking

As we have already said, the TOP suffix after the card's name denotes factory overclocking. The new card's nominal frequencies are 740/1780 MHz for the GPU (versus 650/1625 MHz on the Leadtek) and 2072 MHz for the video memory (versus 1944 MHz on the Leadtek). Recall that for memory chips with a 1.0 ns access time the rated frequency is 2000 MHz.
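For reference, the factory overclock expressed as percentages, computed from the frequencies just quoted (note that ASUS's box claims a 7% performance gain, which is naturally more modest than the raw clock deltas):

```c
#include <stdio.h>

int main(void)
{
    /* ASUS EN8800GTS TOP versus reference GeForce 8800GTS 512 clocks */
    double core_pct   = (740.0 - 650.0)   / 650.0  * 100.0;  /* ~13.8% */
    double shader_pct = (1780.0 - 1625.0) / 1625.0 * 100.0;  /* ~9.5%  */
    double mem_pct    = (2072.0 - 1944.0) / 1944.0 * 100.0;  /* ~6.6%  */

    printf("core   +%.1f%%\n", core_pct);
    printf("shader +%.1f%%\n", shader_pct);
    printf("memory +%.1f%%\n", mem_pct);
    return 0;
}
```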

We managed to overclock the card to the same frequencies as the Leadtek 8800GTS 512: 756/1890 MHz for the GPU and 2100 MHz for the video memory with a fan speed of 50% of the maximum.

Well, now let's go down a notch and get acquainted with two video cards of the GeForce 8800GT class.

Leadtek 8800GT

The Leadtek 8800GT is a typical representative of the GeForce 8800GT series and, in truth, differs little from the majority of them. The point is that GeForce 8800GT cards are cheaper than the "advanced" GeForce 8800GTS 512, which makes them no less interesting.
The Leadtek 8800GT box is almost the same as that of the more expensive 8800GTS 512; the differences are its smaller thickness, the absence of a carrying handle and, of course, the name of the card. The word "extreme" after the name indicates factory overclocking.
The back of the box gives brief information about the card, its advantages and the package contents. Incidentally, our sample included neither the game Neverwinter Nights 2 nor the installation instructions.
The new card's package includes:
  • PCI Express power adapter;
  • S-Video > S-Video + component-out splitter;
  • DVI > D-Sub adapter;
  • CD with drivers;
  • CD with the PowerDVD 7 program;
  • CD with the full version of Neverwinter Nights 2;
  • brief video card installation instructions.

The Leadtek 8800GT is built to the reference design and externally differs only by the sticker on the cooler shroud.
The reverse side of the card does not stand out either, although after seeing the GeForce 8800GTS 512 one notices the missing row of chip capacitors on the left of the board.
The cooling system is made according to the reference design and is well known to us from previous reviews.
Examining the printed circuit board, one notices the absence of components on the right side of the card which, as we have already seen, are mounted in the 8800GTS 512 version. Otherwise it is a quite ordinary board with a G92 GPU cut down to 112 stream processors and eight memory chips totaling 512 MB.
Like those of the other participants in today's testing, the Leadtek 8800GT's memory chips are made by Qimonda with 1.0 ns access time, corresponding to 2000 MHz.

Rated frequencies and overclocking

As already mentioned, the Leadtek 8800GT is factory overclocked: its nominal frequencies are 678/1700 MHz for the GPU and 2000 MHz for the video memory. Very good; yet despite this considerable factory overclock, the card did not show the best result in manual overclocking - only 713/1782 MHz for the GPU and 2100 MHz for the memory. Recall that participants in previous reviews reached 740/1800 MHz on the video processor and 2000-2100 MHz on the memory. Note also that we achieved this result at the cooler's maximum fan speed, since, as we have said, the reference cooler on the GeForce 8800GT does not do its job in the best way.

Now let's move on to the next participant in today's testing.

Palit 8800GT sonic


The face of the Palit 8800GT sonic is a battle frog in spectacular armor. Silly, but very funny! Still, our life is made up of such nonsense, and being reminded of it does no harm. Moving from fun to business, pay attention to the lower right corner of the box, where a sticker lists the card's frequencies and other characteristics. The frequencies of the newcomer are almost the same as those of the GeForce 8800GTS 512: 650/1625 MHz for the GPU, and 1900 MHz for the video memory, only 44 MHz below the 8800GTS 512.
The reverse side of the box contains nothing remarkable, because everything interesting is located on the front side.
The bundle includes:
  • power adapter for PCI Express cards;
  • S-Video > component-out adapter;
  • S-Video > RCA ("tulip") adapter;
  • DVI > D-Sub adapter;
  • DVI > HDMI adapter;
  • CD with drivers;
  • CD with the full version of Tomb Raider: Legend;
  • brief video card installation instructions.
Notably, this is the first GeForce 8800GT-class card with a DVI > HDMI adapter to pass through our test laboratory; previously, only some cards of the AMD Radeon family were bundled with such an adapter.
And here comes the first surprise! The Palit 8800GT sonic is based on a PCB of Palit's own design and is equipped with a proprietary cooling system.
The reverse side of the board also shows some differences, but it is still hard for us to judge the pros and cons of the new layout. What we can judge is the quality of the component mounting.
Since the standoffs between the GPU heatsink and the board are shorter than the gap they are meant to span, and the heatsink is screwed down without any damping pads, both the board and the GPU substrate are noticeably bent. Unfortunately, this can lead to damage; the weak point is not the strength of the PCB itself but the traces, which can crack under the strain. It is by no means certain that this will happen, but the manufacturer should pay more attention to how cooling systems are attached to its cards.
The cooling system is made of painted aluminum and consists of three parts - for the GPU, video memory and power subsystem. The heatsink base for the GPU does not shine with any special treatment, and a solid gray mass is used as a thermal interface.
The changes to the printed circuit board affected the power subsystem: small elements were replaced with larger ones and their layout was altered. Otherwise, this is the familiar GeForce 8800GT with a G92 graphics processor and eight video memory chips totaling 512 MB.
Like the rest of today's test participants, the memory chips are manufactured by Qimonda and have a 1.0 ns access time.

Cooling system efficiency and overclocking

We will check the efficiency of the Palit 8800GT sonic's proprietary cooling system in Oblivion at maximum settings, as always.


The card warmed up from 51 to 61 degrees, which is on the whole a very good result. However, the fan speed rose noticeably, and the already not-quiet cooling system became clearly audible against the general background. That is why the Palit card is hard to recommend to those who value silence.

Despite the changes in the power subsystem and improved cooling, the Palit 8800GT sonic video card overclocked to the usual frequencies of 734/1782 MHz for the GPU and 2000 MHz for the video memory.

So we finished getting to know the participants of today's testing, and therefore we will move on to reviewing the test results.

Testing and conclusions

Today's testing differs not only in that we are comparing four video cards head-to-head, but also in that it was performed on a test bench different from the one you are used to, with the following configuration:

The change of platform is due to the fact that we initially planned to test the Leadtek 8800GTS 512 and ASUS EN8800GTS TOP in SLI mode, but, unfortunately, the ASUS card did not survive our abuse before the tests were finished, and the idea fell through. We therefore decided to postpone SLI testing to a separate article once we have the necessary hardware in hand, and to limit ourselves to single cards for now. Seven video cards take part in the comparison, one of them a GeForce 8800GTS 512 overclocked to 756/1890/2100 MHz. For reference, we added GeForce 8800GT and GeForce 8800GTX cards running at NVIDIA's recommended frequencies. To make it easier to navigate, here is a table with the clock frequencies of all the participants:

Video card (label on the diagrams)                        GPU core / shader, MHz    Effective memory, MHz
Leadtek 8800GTS 512                                       650 / 1625                1944
ASUS EN8800GTS TOP                                        740 / 1780                2072
Leadtek 8800GT                                            678 / 1674                2000
Palit 8800GT sonic                                        650 / 1625                1900
Overclocked GeForce 8800GTS 512 ("8800GTS 512 756/1890/2100")   756 / 1890          2100
GeForce 8800GT ("8800GT")                                 600 / 1500                1800
GeForce 8800GTX ("8800GTX")                               575 / 1350                1800

We used the ForceWare 169.21 and ForceWare 169.25 drivers for Windows XP and Windows Vista, respectively. We traditionally begin our acquaintance with the results with the 3DMark tests:
Based on the 3DMark results you can, of course, see who is stronger and who is weaker, but the differences are so small that there are no clear leaders. Still, it is worth noting that the most expensive participant, the GeForce 8800GTX, took the last places. For the full picture, you should look at the game tests, which, as before, we ran with 4x anti-aliasing and 16x anisotropic filtering.
In Call of Duty 4, the Leadtek 8800GT is almost on a par with the Leadtek 8800GTS 512, and the ASUS EN8800GTS TOP barely lags behind the overclocked GeForce 8800GTS 512. The Palit 8800GT sonic is in the penultimate place, slightly ahead of the reference GeForce 8800GT. The winner is the GeForce 8800GTX, most likely thanks to its memory bus, which is wider than that of the other participants.
In Call of Juarez under Windows XP, the Leadtek 8800GTS 512 is almost on a par with the GeForce 8800GTX, which its wider memory bus no longer saves. Note that the Leadtek 8800GT does not lag behind them, and at 1024x768 it even pulls ahead, thanks to its higher frequencies compared to the other two cards. The leaders are the ASUS card and the overclocked GeForce 8800GTS 512; in the penultimate place is again the Palit card, just ahead of the reference GeForce 8800GT.
Call of Juarez under Windows Vista ran into problems at 1600x1200: large fluctuations in speed and, in places, very low performance. We assume the problem is a lack of video memory in such a heavy mode; we will check whether this is so in the next review, using the ASUS 8800GT with 1 GB of video memory. Note right away that the GeForce 8800GTX had no such problems. Judging by the two lower resolutions, the balance of power has hardly changed compared to Windows XP, except that the GeForce 8800GTX reminded us of its noble origin, though it did not take the lead.
In Crysis under Windows XP, the balance of power has shifted slightly, but in essence everything remains the same: the Leadtek 8800GTS 512 and Leadtek 8800GT are at about the same level, the leaders are the ASUS EN8800GTS TOP and the overclocked GeForce 8800GTS 512, and last place goes to the GeForce 8800GT. Note also that as the resolution grows, the gap between the overclocked GeForce 8800GTS 512 and the GeForce 8800GTX shrinks thanks to the latter's wider memory bus. Still, higher clock speeds prevail, and yesterday's champion is left out of a job.
The Windows Vista problem at 1600x1200 did not spare Crysis either, sparing only the GeForce 8800GTX. As in Call of Juarez, there were speed jerks and, in places, very sharp drops in performance, sometimes below one frame per second. Judging by the two lower resolutions, this time the Leadtek 8800GTS 512 outperformed its younger sister, taking third place. The first places went to the ASUS EN8800GTS TOP, the overclocked GeForce 8800GTS 512 and the GeForce 8800GTX, which finally took the lead at 1280x1024.
In Need for Speed Pro Street Racing, the GeForce 8800GTX leads, at 1024x768 by a large margin. It is followed by the Leadtek 8800GTS 512, then the ASUS EN8800GTS TOP and the overclocked GeForce 8800GTS 512, while the last places went to the GeForce 8800GT and the Palit 8800GT sonic. Since the GeForce 8800GTX came out on top, we can conclude that the game depends heavily on memory bandwidth. This also suggests why the overclocked GeForce 8800GTS 512 variants turned out slower than the stock one: apparently, memory latencies grew along with its clock frequency.
In Need for Speed Carbon we see a familiar picture: the Leadtek 8800GTS 512 and Leadtek 8800GT are on a par, the overclocked GeForce 8800GTS 512 and ASUS EN8800GTS TOP took the first places, and the GeForce 8800GT is last. The GeForce 8800GTX looks decent, but nothing more.
In Oblivion, it is striking that at 1024x768 the overclocked GeForce 8800GTS 512 and the ASUS EN8800GTS TOP took the last places. We assumed that memory latencies, increased along with the frequency, were to blame, and we were right: after lowering the memory frequency of the overclocked GeForce 8800GTS 512 to nominal, it scored over 100 frames per second. As the resolution grows, the situation rights itself, and the former outsiders become leaders. Note also that the Leadtek 8800GT outperforms the Leadtek 8800GTS 512, most likely thanks to its higher shader clock.
Prey turned out to be undemanding for all the cards, and they lined up according to their clock speeds. Only the GeForce 8800GTX behaved a little differently, which is understandable: it has a wider memory bus, and the game depends heavily on bandwidth.

Conclusions

The purpose of today's testing was to find out how much these video cards differ from one another and whether the high price of the "advanced" GeForce 8800GTS 512 is justified. The results show that the cards are very close to each other, even though the GeForce 8800GTS 512 surpasses the GeForce 8800GT on paper, including in the number of active functional blocks inside the GPU. The obvious advantages of the new GeForce 8800GTS 512 cards are a high-quality, quiet cooling system and higher overclocking potential than the GeForce 8800GT. The ASUS card deserves special mention: thanks to its factory overclocking, it takes the leading position. Of course, you can overclock a card yourself, and most likely any GeForce 8800GTS 512 will reach the frequencies of the ASUS card. On the whole, let us note once again that the new family based on the G92 graphics chip has turned out very successful and may well replace the recent leader, the GeForce 8800GTX.

Pros and cons of individual video cards:

Leadtek 8800GTS 512

Pros:
  • good overclocking potential;
  • solid equipment;
  • bright and convenient packaging.
Minuses:
  • not noticed.

ASUS EN8800GTS TOP

Pros:
  • factory overclocking;
  • high-quality cooling system;
  • good overclocking potential.
Minuses:
  • too large and inconvenient packaging.

Leadtek 8800GT

Pros:
  • factory overclocking;
  • solid equipment.
Minuses:
  • not noticed.

Palit 8800GT sonic

Pros:
  • factory overclocking;
  • alternative cooling system;
  • solid equipment.
Minuses:
  • a strongly curved board in the GPU area;
  • noticeable fan noise.

Again 128 Californian shooters, now more powerful but with trimmed spears (512 MB and 256 bit)

Part 1: Theory and Architecture

In the previous article, dedicated to the release of Nvidia's new mid-range solution, the Geforce 8800 GT based on the G92 chip, we mentioned that not all of that chip's ALU and TMU execution units were unlocked; some were waiting in the wings to be enabled in a graphics card of a different price level. That moment has now come: Nvidia has announced an updated version of the Geforce 8800 GTS that keeps the name of the older G80-based solution. The easiest way to tell it apart is by its memory size of 512 MB, as against the earlier 320 MB and 640 MB versions; hence the model's name, Geforce 8800 GTS 512MB.

The new Geforce 8800 GTS is based on the G92 chip already used in the Geforce 8800 GT, a card of the so-called upper mid-range, so we already know its main features and characteristics. Unlike the two Geforce 8800 GT models with recommended prices of $200 to $250 (which, incidentally, do not correlate well with actual prices at the moment), the new solution carries a recommended price of $349-399. As used here, the chip supports only a 256-bit memory bus but has a larger number of unlocked universal execution units. Let's take a closer look at the new lower high-end solution from Nvidia...

Before reading this material, we recommend that you carefully read the basic theoretical materials DX Current, DX Next, and Longhorn, which describe various aspects of modern hardware graphics accelerators and architectural features of Nvidia and AMD products.

These materials accurately predicted the current situation with video chip architectures, and many assumptions about future solutions have come true. For more information on the Nvidia G8x / G9x unified architecture using previous chips as examples, see the following articles:

As we mentioned in the previous article, the G92 chip includes all the advantages of the G8x: a unified shader architecture, full DirectX 10 support, high-quality anisotropic filtering methods, and an anti-aliasing CSAA algorithm with up to sixteen samples. Some of the chip blocks are slightly different from those in the G80, but the main change compared to the G80 is the 65nm manufacturing technology, which has reduced production costs. Consider the characteristics of the GPU and new video solutions based on it:

Geforce 8800 GTS 512MB graphics accelerator

  • Chip codename G92
  • 65 nm technology
  • 754 million transistors (more than the G80)
  • Unified architecture with an array of shared processors for streaming vertex and pixel processing and other types of data
  • Hardware support for DirectX 10, including shader model - Shader Model 4.0, geometry generation and recording of intermediate data from shaders (stream output)
  • 256-bit memory bus, four independent 64-bit wide controllers
  • Core frequency 650 MHz (Geforce 8800 GTS 512MB)
  • ALUs operate at more than double the frequency (1.625 GHz for Geforce 8800 GTS 512MB)
  • 128 scalar floating point ALUs (integer and floating point formats, support for FP 32-bit precision within the IEEE 754 standard, MAD + MUL without clock loss)
  • 64 texture addressing units with support for FP16 and FP32 components in textures
  • 64 bilinear filtering units (as in G84 and G86: unlike the G80, there is no free trilinear filtering and no reduced-cost anisotropic filtering)
  • Possibility of dynamic branching in pixel and vertex shaders
  • 4 wide ROP blocks (16 pixels) with support for antialiasing modes up to 16 samples per pixel, including FP16 or FP32 framebuffer format. Each block consists of an array of flexibly configurable ALUs and is responsible for generating and comparing Z, MSAA, blending. Peak performance of the entire subsystem up to 64 MSAA samples (+ 64 Z) per cycle, in the Z only mode - 128 samples per cycle
  • Record results with up to 8 frame buffers simultaneously (MRT)
  • All interfaces (two RAMDACs, two Dual DVI, HDMI, HDTV) are integrated on the chip (in contrast to the external NVIO chip for Geforce 8800)

Specifications of the Geforce 8800 GTS 512MB Reference Card

  • Core frequency 650 MHz
  • Frequency of universal processors 1625 MHz
  • The number of universal processors 128
  • The number of texture units - 64, blending units - 16
  • Effective memory frequency 1.94 GHz (2 * 970 MHz)
  • GDDR3 memory type
  • Memory capacity 512 megabytes
  • Memory bandwidth 62.1 gigabytes per second (see the sanity-check sketch after this list)
  • Maximum theoretical fill rate 10.4 gigapixels per second
  • Theoretical texture sampling rate up to 41.6 gigatexels per second
  • Two DVI-I Dual Link connectors, supports output at resolutions up to 2560x1600
  • SLI connector
  • PCI Express 2.0 bus
  • TV-Out, HDTV-Out, HDCP support
  • Recommended price $ 349-399
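The headline throughput figures above are simple products of the other numbers in this list; a small sanity-check sketch (straight arithmetic, no vendor tools assumed):

```python
# Recomputing the Geforce 8800 GTS 512MB spec-sheet figures from first principles.
bus_width_bits    = 256
mem_effective_mhz = 1940  # 2 x 970 MHz
core_mhz          = 650
pixels_per_clock  = 16    # four wide ROP blocks
tmus              = 64

bandwidth_gb_s = (bus_width_bits // 8) * mem_effective_mhz / 1000  # 62.08 GB/s
fill_gpix_s    = pixels_per_clock * core_mhz / 1000                # 10.4 Gpix/s
texel_gtex_s   = tmus * core_mhz / 1000                            # 41.6 Gtex/s
print(bandwidth_gb_s, fill_gpix_s, texel_gtex_s)
```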

As you can see from the specifications, the new Geforce 8800 GTS 512MB differs noticeably from the old versions. The number of execution units (ALUs and TMUs) has grown, and the GPU frequency has risen substantially, including the shader clock. Despite the narrower memory bus (256-bit versus 320-bit in the old versions), memory bandwidth remains about the same, since the memory frequency was raised correspondingly. As a result, the new GTS has considerably more shader power and a higher texture fetch rate, while fill rate and memory bandwidth stayed roughly unchanged.

Because of the changed memory bus width, the memory size can no longer be 320 MB or 640 MB: only 256 MB, 512 MB or 1 GB are possible. The first is too little and would clearly not be enough for a card of this class; the last is too much, as the insignificant performance gain would hardly justify the higher price of such variants (which may well appear in the future). Nvidia therefore chose the middle option of 512 MB, which, as our recent research has shown, is the golden mean for modern games that are demanding on video memory and use up to 500-600 megabytes. We never tire of repeating that this does not mean all game resources must reside in the card's local memory: resource management can be left to the API, especially in Direct3D 10 with its video memory virtualization.
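The capacity steps follow from the chip organization; a sketch assuming the common arrangement of eight 32-bit GDDR3 chips filling a 256-bit bus:

```python
# Why a 256-bit card comes in 256 MB / 512 MB / 1 GB steps: the bus is filled
# by eight 32-bit chips, so total capacity moves only in whole steps of chip
# density (the densities below are typical GDDR3 values).
chips_on_bus = 256 // 32  # 8 chips
for chip_megabits in (256, 512, 1024):
    total_mb = chips_on_bus * chip_megabits // 8  # megabits -> megabytes
    print(f"{chip_megabits} Mbit chips -> {total_mb} MB total")
```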

Architecture

As written in the previous article on the Geforce 8800 GT, the G92 is essentially the previous flagship, the G80, transferred to a new process node, with some changes. The new chip has 8 large shader units and 64 texture units, as well as four wide ROPs. Despite all the changes for the better, the transistor count looks too high; the increased complexity is probably explained by the integration of the previously separate NVIO chip and the new-generation video processor. The count was also affected by the more complex TMUs, and the caches may have been enlarged to feed the 256-bit memory bus more efficiently.

There are very few architectural changes in the G92 chip; we covered them all in the previous article and will not repeat ourselves. Everything said in the reviews of previous solutions remains in force; here we present only the main diagram of the G92 chip, now with all 128 universal processors:

Of all the changes relative to the G80, the chip has only a reduced number of ROPs and some changes in the TMUs, which were covered in our previous material. Let us dwell once more on the point that the 64 texture units of the Geforce 8800 GTS 512MB will NOT, in most real applications, be stronger than the 32 units of the Geforce 8800 GTX. With trilinear and/or anisotropic filtering enabled, their performance is approximately equal, since the two chips have the same number of texture-filtering units. Of course, where unfiltered fetches are used, G92-based solutions will be faster.
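A rough way to see why, assuming (as is commonly done) that one trilinear texel costs two bilinear filter operations and that output is limited by the slower of addressing and filtering:

```python
# Estimated filtered texels per clock; a simplification, not a vendor figure.
def trilinear_texels_per_clock(address_units: int, filter_units: int) -> float:
    # Trilinear needs two bilinear filter operations per texel.
    return min(address_units, filter_units / 2)

print(trilinear_texels_per_clock(64, 64))  # G92, 8800 GTS 512MB -> 32.0
print(trilinear_texels_per_clock(32, 64))  # G80, 8800 GTX       -> 32.0
```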

PureVideo HD

One of the anticipated changes in the G92 is the integrated second-generation video processor known from the G84 and G86, with expanded PureVideo HD support. This version of the video processor almost completely offloads the CPU when decoding all common video formats, including the "heavy" H.264 and VC-1. The G92 uses a new model of programmable video processor with the so-called BSP engine. It supports decoding of H.264, VC-1 and MPEG-2 at resolutions up to 1920x1080 and bitrates up to 30-40 Mbit/s, performing CABAC and CAVLC decoding in hardware, which allows playing all existing HD DVD and Blu-ray discs even on mid-powered single-core PCs. VC-1 decoding is not offloaded as fully as H.264, but it is still supported by the new processor. You can read more about the second-generation video processor in our G84/G86 and G92 reviews, linked at the beginning of the article.

PCI Express 2.0

One of the real innovations in the G92 is support for the PCI Express 2.0 bus. The second version of PCI Express doubles the per-lane bandwidth from 2.5 Gbit/s to 5 Gbit/s, so an x16 connector can transfer data at up to 8 GB/s in each direction, versus 4 GB/s for version 1.x. Importantly, PCI Express 2.0 is compatible with PCI Express 1.1: old video cards work in new motherboards, and new cards remain functional in boards without 2.0 support, provided the external power supply is sufficient, albeit without the increased interface bandwidth.
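The 4 GB/s and 8 GB/s figures are easy to verify: per-direction bandwidth is line rate times lane count, minus the 8b/10b encoding overhead that both PCIe 1.x and 2.0 use. A minimal sketch:

```python
# Per-direction PCIe bandwidth: line rate (GT/s) x lanes x 8b/10b efficiency.
def pcie_gb_per_s(line_rate_gt_s: float, lanes: int = 16) -> float:
    payload_gbit_s = line_rate_gt_s * lanes * 0.8  # 8b/10b: 8 data bits per 10
    return payload_gbit_s / 8                      # bits -> bytes

print(pcie_gb_per_s(2.5))  # 4.0 GB/s for PCI Express 1.x, x16
print(pcie_gb_per_s(5.0))  # 8.0 GB/s for PCI Express 2.0, x16
```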

Nvidia's main competitor has estimated the real impact of the higher PCI Express bandwidth in its own materials. According to them, a mid-range card with 256 megabytes of local memory speeds up by about 10% when moving from PCI Express 1.0 to 2.0 in modern games such as Company of Heroes, Call of Juarez, Lost Planet and World in Conflict, with figures ranging from 5% to 25% depending on the game and test conditions. Naturally, this applies to high resolutions, when the frame buffer and auxiliary buffers occupy most of the local video memory and some resources spill over into system memory.

To ensure backward compatibility with existing PCI Express 1.0 and 1.1 solutions, the 2.0 specification supports both 2.5 Gbit/s and 5 Gbit/s transfer rates: a legacy 2.5 Gbit/s card in a 5 Gbit/s slot simply runs at the lower speed, while a device designed to the 2.0 specification can operate at either rate. In theory compatibility is fine, but in practice problems may arise with particular combinations of motherboards and expansion cards.

External interface support

Everything here is the same as in the Geforce 8800 GT; there are no differences. The separate NVIO chip found on Geforce 8800 boards, which carried the external interfaces (two 400 MHz RAMDACs, two Dual Link DVI (or LVDS), HDTV-Out), has in this case been integrated into the GPU itself: support for all these interfaces is built into the G92.

Geforce 8800 GTS 512MB cards usually carry two Dual Link DVI outputs with HDCP support. HDMI is also supported and can be implemented by manufacturers on cards of special design, although a dedicated HDMI connector on the card is entirely optional: the DVI-to-HDMI adapter bundled with most modern video cards replaces it successfully.

Manufacturers have long practiced releasing cheaper solutions based on high-end graphics processors. This approach significantly increases the variety of ready-made solutions and lowers their cost, and most users prefer products with the best price/performance ratio.
NVIDIA took the same approach with its latest G80, the world's first GPU with a unified architecture and support for Microsoft's new API, DirectX 10.
Simultaneously with the flagship GeForce 8800 GTX, a cheaper version called the GeForce 8800 GTS was released. It differs from its older sister in the reduced number of stream processors (96 versus 128) and in video memory (640 MB instead of 768 MB on the GTX). Fewer memory chips also mean a narrower memory interface: 320 bits against 384. More detailed characteristics of the graphics adapter in question can be found in the table:

The ASUS EN8800GTS arrived in our test lab, and it is the card we examine today. This manufacturer is one of NVIDIA's largest and most successful partners and traditionally does not skimp on packaging or its design. As the saying might go, "there should be a lot of a good video card." The novelty comes in a box of impressive dimensions:


On its front side is a character from the game Ghost Recon: Advanced Warfighter. The case is not limited to one image - the game itself, as you might have guessed, is included. On the back of the package, there are brief product characteristics:


ASUS considered this amount of information insufficient, making a kind of book out of the box:


To be fair, this approach has been practiced for quite a long time, and by no means only by ASUS. But everything is good in moderation, and here the maximum information content turned into a practical inconvenience: a light breath of wind and the top cover flies open. While transporting the hero of today's review, we had to contrive and bend the retaining tab so that it served its purpose; unfortunately, bending it can easily damage the packaging. Finally, the box is unreasonably large, which causes some inconvenience.

Video adapter: packaging and close inspection

Well, let's get to the bundle and the card itself. The adapter is packed in an antistatic bag and a foam container, which protects the board from both electrical and mechanical damage. The box contains discs, DVI > D-Sub adapters, VIVO and auxiliary power cables, as well as a disc case.


Among the bundled discs, the GTI Racing game and the 3DMark06 Advanced Edition benchmark stand out! This is the first time we have seen 3DMark06 in the bundle of a mass-produced retail video card. Without a doubt, this fact will appeal to users who are actively into benchmarking.


Now for the video card itself. It is based on the reference PCB with the reference cooling system, and is distinguished from other similar products only by the manufacturer's logo sticker, which keeps up the Ghost Recon theme.


The reverse side of the printed circuit board is equally unremarkable: a host of SMD components and voltage regulators, and that's all:


Unlike the GeForce 8800 GTX, the GTS requires only one additional power connector:


In addition, it is shorter than its older sister, which will surely please owners of small cases. In terms of cooling there are no differences: like the GF 8800 GTX, the ASUS EN8800GTS uses a cooler with a large turbine-type fan. The heatsink consists of a copper base and an aluminum casing, with heat pipes helping carry heat from the base to the fins, which raises the overall efficiency of the design. Hot air is exhausted outside the system unit, though part of it, alas, stays inside the PC because of openings in the cooler shroud.


However, the problem of strong heating is easily solved: a low-speed 120 mm case fan, for example, noticeably improves the board's temperatures.
Besides the graphics processor, the cooler also cools the memory chips, the power subsystem elements, and the video signal DAC (the NVIO chip).


The NVIO was moved out of the main chip because the GPU's high clock frequencies caused interference with its operation.
Unfortunately, this circumstance complicates cooler replacement, so NVIDIA's engineers simply had no right to make the stock cooler poorly. Let's take a look at the card in its "naked" form.


The PCB carries a revision A2 G80 chip and 640 MB of video memory made up of ten Samsung chips. The memory access time is 1.2 ns, slightly faster than on the GeForce 8800 GTX.


Note that the board has two vacant pads for memory chips. Had they been populated, the total memory would have been 768 MB with a 384-bit bus, as on the GTX; alas, the card's developer considered that step unnecessary. Such a configuration is used only in professional Quadro-series cards.
Finally, note that the card has only one SLI connector, unlike the GF 8800 GTX, which has two.

Testing, analysis of results

The ASUS EN8800GTS video card was tested on a test bench with the following configuration:
  • processor: AMD Athlon 64 @ … MHz (Venice);
  • motherboard: ASUS A8N-SLI Deluxe, NVIDIA nForce 4 SLI chipset;
  • RAM: 2×512 MB @ … MHz, timings 3.0-4-4-9-1T.
The tests were carried out under Windows XP SP2 with chipset driver version 6.86 installed.
The RivaTuner utility confirmed that the card's actual specifications match the declared ones:


The video processor runs at 510/1190 MHz, the memory at 1600 MHz. The maximum temperature, reached after repeated runs of the Canyon Flight test from the 3DMark06 suite, was 76°C with the stock cooler's fan at 1360 rpm:


For comparison: under the same conditions, the GeForce 6800 Ultra AGP we had at hand heated up to 85°C at maximum fan speed, and after a long run froze altogether.

The performance of the new video adapter was tested using popular synthetic benchmarks and some gaming applications.

Testing in Futuremark's benchmarks revealed the following:


Of course, on a system with a more powerful central processor, such as a representative of the Intel Core 2 Duo architecture, the result would be better. In our case, the morally outdated Athlon 64 (even overclocked) cannot fully reveal the potential of today's top video cards.

Let's move on to testing in real gaming applications.


Need for Speed Carbon clearly shows the difference between the rivals: the GeForce 7900 GTX lags behind the 8800-generation cards more than noticeably.


Since Half-Life 2 needs not only a powerful video card but also a fast processor for comfortable play, a clear difference in performance shows only at maximum resolutions with anisotropic filtering and full-screen anti-aliasing enabled.


In F.E.A.R. the picture is approximately the same as in HL2.


In heavy Doom 3 modes, the card in question performed very well, but the weak central processor does not let us fully assess the gap between the GeForce 8800 GTS and its older sister.


Since Prey is built on the Quake 4 engine, which in turn is a development of the Doom 3 engine, the performance results of the video cards in these games are similar.
The progressive unified shader architecture, together with the modest "cutbacks" relative to its older sister, places the GeForce 8800 GTS between NVIDIA's fastest graphics adapter of today and the flagship of the seven-thousand line. The Californians would hardly have acted otherwise: a novelty of this class should be more powerful than its predecessors. It is pleasing that in speed the GeForce 8800 GTS is much closer to the GeForce 8800 GTX than to the 7900 GTX. Support for the newest graphics technologies also inspires optimism and should leave owners of such adapters a good performance margin for the near (and, we hope, more distant) future.

Verdict

After examining the card, we were left with an extremely good impression, which the price factor only improved. At its market debut and for some time afterwards, the ASUS EN8800GTS cost, according to price.ru, about 16,000 rubles, a clearly inflated price. Now the card has long been selling for about 11,500 rubles, which does not exceed the cost of similar products from competitors; considering the bundle, ASUS's brainchild is undoubtedly in a winning position.

Pros:

  • DirectX 10 support;
  • progressive unified chip architecture;
  • excellent performance level;
  • rich bundle;
  • well-known brand;
  • price on par with products from less reputable competitors.
Minuses:
  • the big box is not always convenient.
We are grateful to the Russian representative office of ASUS for the video card provided for testing.

Reviews, wishes and comments on this material are accepted in the site forum.
