
Did Nvidia just kill the game?

DSJR

Major Contributor
Joined
Jan 27, 2020
Messages
3,404
Likes
4,560
Location
Suffolk Coastal, UK
My Gawd, I thought high end audio was bad o_O
 

kn0ppers

Active Member
Joined
Jul 4, 2019
Messages
192
Likes
258
Location
Germany
I was thinking about sending my watercooled Titan X Pascal into retirement and replacing it with a 3080, but nah. I barely find time to play games; when I do, I don't use raytracing, and the other 3D stuff I do barely gets my Titan to break a sweat. And I would have to get a new waterblock when everyone else is probably trying to get one as well, so availability will probably be suboptimal.
 

bigjacko

Addicted to Fun and Learning
Joined
Sep 18, 2019
Messages
722
Likes
360
For me, AMD still gets a slightly better price-to-performance ratio with the 5700 and 5700 XT; the 3070 will be better, but it's a next-gen card. In straight-up performance Nvidia still wins - I hope AMD can bring out some better high-end cards.
 

Racheski

Major Contributor
Forum Donor
Joined
Apr 20, 2020
Messages
1,116
Likes
1,702
Location
Chicago
I understand the call for something new and with more power... But what are the use cases? I mean, what is it for? Do you think your job will be changed by faster rendering or things like that?
The main use case is 4K gaming at the highest graphics settings, or maybe 1440p if you want a big FPS increase. The 3090 also claims to support 8K at 60 FPS.
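To put those targets in rough numbers (pixel counts only - a back-of-the-envelope sketch that ignores per-pixel shading cost):

# Pixels a GPU has to fill per frame and per second at 60 FPS (illustrative only)
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440),
               "4K": (3840, 2160), "8K": (7680, 4320)}
fps = 60
for name, (w, h) in resolutions.items():
    per_frame = w * h
    print(f"{name}: {per_frame:,} px/frame, {per_frame * fps / 1e6:.0f} Mpx/s at {fps} FPS")
# 4K is 4x the pixels of 1080p, and 8K is 4x the pixels of 4K again.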
 

Vasr

Major Contributor
Joined
Jun 27, 2020
Messages
1,409
Likes
1,926
My Gawd, I thought high end audio was bad o_O

Not as bad as high-end audio, because the improvements when moving up come in big, chunky leaps (in frame rate or resolution), so you don't need DBT equivalents to prove the improvements.

It is not like spending money to go from 1080p to 1081p or from 60fps to 60.0007fps, as happens in audio. :)

PS: Although the effort that goes into overclocking a card may be equivalent to audio SINAD chasing.
 

DSJR

Major Contributor
Joined
Jan 27, 2020
Messages
3,404
Likes
4,560
Location
Suffolk Coastal, UK
My son games, but I have no interest at all and am more concerned about wireless dropouts to the CCA and my old laptop hanging when streaming TV programs on 'catchup' - so forget resolution, as it's irrelevant. I'm just grateful that 'your' computer cast-offs eventually come trickling down to peeps like me :D - same with audio really, as I realise the newest part of my 'main' stereo is thirteen years old (speakers) and everything else is thirty-plus years old, the amps dating from the early 70s - EEK!!!!!!!
 

JohnYang1997

Master Contributor
Technical Expert
Audio Company
Joined
Dec 28, 2018
Messages
7,175
Likes
18,300
Location
China
In audio reproduction we have our ears, and at the very least the transducers, as a limit.
There is nowhere near such a limit on the benefits that better-specced PCs can bring. There will always be better-looking games, and higher resolution and refresh rate monitors (the only thing close to a human limit, but not there yet). There will always be a need for faster encoding, compiling, etc. Completely different.
If it's just about finding something to spend money on, then yes, anything can cost money.
 

sweetchaos

Major Contributor
The Curator
Joined
Nov 29, 2019
Messages
3,917
Likes
12,119
Location
BC, Canada
Someone above asked what the use cases of GPUs are....

Apologies in advance for the long post.

There are 2 types of users who use GPUs:
1. Gamers
2. Non-Gamers (aka people who need GPUs to do their job)

>>>>>>>>>>>>>>>>>>>>>>>>>

For gamers, it's a no-brainer.
You get the best possible graphics card you can afford, in order to get an edge in framerate so that you can p0wn no0bz and brag to your friends, of course!

>>>>>>>>>>>>>>>>>>>>>>>>>

In a workplace, it's a different mentality.
There are 3 types of GPUs in use, depending on the type of work being performed.
Gaming GPUs are one type, but there are 2 more.
Think about it for a minute and see if you can come up with an answer.
This is the part where most people will blank out, if they don't understand the technology.
Now, read on.

>>>>>>>>>>>>>>>>>>>>>>>>>
Let's define 3 types of graphics card:
1. Gaming GPUs
2. Workstation GPUs
3. Data Center GPUs

Give yourself a cookie if you guessed correctly.

Here are some examples of each, so you have a reference:
1. Gaming GPUs
- ex: AMD Radeon RX 5000 series
- ex: Nvidia GeForce 30 series (3070, 3080, 3090 announced earlier)
2. Workstation GPUs
- ex: AMD Radeon Pro Vega
- ex: Nvidia Quadro RTX
3. Data Center GPUs
- ex. Nvidia Tesla
>>>>>>>>>>>>>>>>>>>>>>>>>

Just like there's a specific tool for a handyman or electrician to do their job, there's a specific GPU for the type of work being done.

First, let's define the types of work you'll see in the real world.
I'll break it down into categories first, and then talk about which GPUs are used where.
1. Content Creation
2. Engineering
3. Data Science
>>>>>>>>>>>>>>>>>>>>>>>>>

Now, let's add the type of workload for each category:
1. Content Creation
1a. Photography
1b. Post Production
1c. 3D Design and Animation
1d. Rendering
2. Engineering
2a. CAD
2b. Photogrammetry
3. Data Science
3a. Machine Learning AI / TensorFlow
3b. Machine Learning Development
>>>>>>>>>>>>>>>>>>>>>>>>>

Now, let's add the software that are used for each workload:
1. Content Creation
1a. Photography
- Adobe Lightroom Classic
- Adobe Photoshop
1b. Post Production
- Adobe After Effects
- Adobe Premiere Pro
- DaVinci Resolve
1c. 3D Design and Animation
- Autodesk 3ds Max
- Autodesk Maya
- Cinema 4D
- Unity
- Unreal Engine
1d. Rendering
- OTOY OctaneRender
- Redshift
- V-Ray
2. Engineering
2a. CAD
- Autodesk AutoCAD
- Autodesk Inventor
- Autodesk Revit
- IrisVR Prospect
- Solidworks
2b. Photogrammetry
- Agisoft Metashape
- Pix4D
- RealityCapture
3. Data Science
3a. Machine Learning AI / TensorFlow
3b. Machine Learning Development
>>>>>>>>>>>>>>>>>>>>>>>>>

Now, remember what I said about having the right tool for the job?
Well, a workstation GPU can do the job of a gaming GPU (assuming it supports the same APIs), but the cost will be different.
All these GPUs have their own performance/cost ratio, so choosing the best GPU for the software is what's done in the real world.
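As a minimal sketch of that performance-per-dollar weighing (the scores and prices below are placeholders, not real benchmark data):

# Rank candidate GPUs by application benchmark score per dollar (placeholder numbers)
candidates = {
    "gaming GPU A":      {"score": 100, "price_usd": 700},
    "workstation GPU B": {"score": 110, "price_usd": 2300},
    "gaming GPU C":      {"score": 80,  "price_usd": 400},
}
for name, c in sorted(candidates.items(),
                      key=lambda kv: kv[1]["score"] / kv[1]["price_usd"],
                      reverse=True):
    print(f"{name}: {c['score'] / c['price_usd']:.3f} points per dollar")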

Some software applications were written to take advantage of a workstation GPU, and others of a regular gaming GPU. You get the idea.

Let's break down which GPU will fit into each of the software above.

>>>>>>>>>>>>>>>>>>>>>>>>>
Complete breakdown of each software and their recommended GPUs:
- Adobe Lightroom Classic => Lightroom Classic cannot effectively utilize a high-end GPU at the moment, so spending money on a high-end GPU will get you little or no increase in performance, as opposed to spending money on a better processor (CPU).

- Adobe Photoshop => Although Adobe is constantly expanding GPU acceleration support in Photoshop, the current demand on the video card is actually relatively light. Even an entry-level video card will be able to provide a huge boost in performance for GPU-accelerated effects, but there is a sharp drop in performance benefit from using anything more than a mid-range video card. A few tasks may see a performance benefit from using a high-end card like the RTX 2080, but a GTX 1060 or RTX 2070 is going to get you within a few percent of the best performance possible.

- Adobe After Effects => For After Effects, it is extremely important to have a supported GPU, but the actual speed of that card will not have a major impact on performance. Compared to the high-end RTX 2080 Ti, even a GTX 1060 is only about 6% slower. Once you get to a GTX 1070 Ti, the difference shrinks further to only a few percent.

- Adobe Premiere Pro => Premiere Pro benefits greatly from using a GPU, but which card is best depends on how many GPU-accelerated effects you use. If you only use a few, even an RTX 2060 SUPER should perform about as well as an RTX 2080 Ti. However, the more GPU-accelerated effects you use, the greater the benefit of using a higher-end card.

- DaVinci Resolve => DaVinci Resolve benefits greatly from using a GPU, and currently the NVIDIA RTX 2080 Ti is the best card you can get for Resolve - followed closely by the RTX 2080. If you are on a bit more of a budget, the AMD Radeon Vega cards do very well for their price, although even the Radeon Vega (the fastest Radeon card currently offered by AMD) is easily beaten by the more expensive NVIDIA cards.

- Autodesk 3ds Max => When creating, editing, and animating models in 3ds Max, the video card is a large part of how many frames per second (FPS) the viewport is able to display the model at. A higher FPS will result in a smoother and overall better experience when rotating, zooming, or panning around the model you are working on. In general, 30 FPS is considered a minimum acceptable framerate, while 60 FPS is ideal. With recent versions of 3ds Max, Autodesk has included both Quadro and GeForce cards in their lists of supported GPUs. However, they differentiate between the two in that they call the Quadro cards "Certified" while the GeForce are only "Tested". While GeForce cards can work well in 3ds Max, Autodesk's official policy is that they "only recommend and support the professional NVIDIA Quadro and AMD FirePro graphics family cards". Because of this, our recommended systems both default to NVIDIA Quadro models. For the times when using a GeForce card takes priority over official Autodesk support, like game development or GPU-based rendering, we do list GeForce options as well.

- Autodesk Maya => When creating, editing, and animating models in Maya, the video card is a large part of how many frames per second (FPS) the viewport is able to display the model at. A higher FPS will result in a smoother and overall better experience when rotating, zooming, or panning around the model you are working on. In general, 30 FPS is considered a minimum acceptable framerate, while 60 FPS is ideal. With recent versions of Maya, Autodesk has included both Quadro and GeForce cards in their lists of supported GPUs. However, they differentiate between the two in that they call the Quadro cards "Certified" while the GeForce are only "Tested". Because of this, our recommended systems both default to NVIDIA Quadro models. For the times when using a GeForce card takes priority over official Autodesk support, like game development or GPU-based rendering, we do list GeForce options as well.

- Cinema 4D => When creating, editing, and animating models in Cinema 4D, the video card is a large part of how many frames per second (FPS) the viewport is able to display the model at. A higher FPS will result in a smoother and overall better experience when rotating, zooming, or panning around the model you are working on. In general, 30 FPS is considered a minimum acceptable framerate, while 60 FPS is ideal. While GeForce cards can work well in Cinema 4D, NVIDIA typically recommends using Quadro cards in professional graphics applications. Because of this, our recommended systems default to NVIDIA Quadro video cards. However, for the times when using a GeForce card makes more sense, such as game development using the Unity or Unreal game engine, we do list GeForce options as well.

- Unity => Currently, Unity utilizes the video card solely to display the graphics on the screen. Many applications in other fields have begun using the GPU for other tasks as well, but this has not yet been implemented in the Unity editor. Because of this, a faster video card will give you a higher FPS in the viewport or in a stand-alone game, but likely will not improve your productivity in other tasks. We currently offer three video cards on our Unity workstations depending on your budget and whether you are planning on developing VR content: 1. NVIDIA GeForce RTX 2070 SUPER 8GB - This GPU offers great performance for its price, and has plenty of power to handle multiple displays without a problem. 2. NVIDIA GeForce RTX 2080 Ti 11GB - With 11GB of VRAM and terrific performance for the price, the RTX 2080 Ti is one of the best GPUs to use for video game development and our recommendation if you plan on developing VR content. The high amount of VRAM makes it suitable for workstations with three or even four 4K displays and the extra power is great for games that have not been optimized.

- Unreal Engine => Currently, Unreal Engine utilizes the video card solely to display the graphics on the screen. Many applications in other fields have begun using the GPU for other tasks as well, but this has not yet been implemented in the Unreal Editor. Because of this, a faster video card will give you a higher FPS in the viewport or in a stand-alone game, but likely will not improve your productivity in other tasks. We currently offer three video cards on our Unreal Engine workstations depending on your budget and whether you are planning on developing VR content: 1. NVIDIA GeForce RTX 2070 SUPER 8GB - This GPU offers great performance for its price, and has plenty of power to handle multiple displays without a problem. 2. NVIDIA GeForce RTX 2080 Ti 11GB - With 11GB of VRAM and terrific performance for the price, the RTX 2080 Ti is one of the best GPUs to use for video game development and our recommendation if you plan on developing VR content. The high amount of VRAM makes it suitable for workstations with three or even four 4K displays and the extra power is great for games that have not been optimized.

- OTOY OctaneRender => The video card selection is the driving factor for performance in OctaneRender. There are two aspects of a video card that impact render capabilities: the raw speed of the GPU itself and the amount of memory on the card. Video memory will limit how large and complex a scene can be rendered. GeForce cards tend to have good raw performance, with decent amounts of video memory, while Quadro cards come with larger amounts of VRAM but also cost far more for the same level of raw performance. As such, we recommend GeForce series cards for most Octane users. 1. GeForce RTX 2070 SUPER 8GB - The RTX 2070 SUPER is an upgrade from the RTX 2070, which in turn was a replacement for the older GTX 1080. The newer features in the RTX series, particularly RT cores (hardware based ray tracing), make the older 1000-series obsolete - while the added performance in the 2070 SUPER puts it in the same performance range as the RTX 2080 and 2080 SUPER at a lower price. 2. GeForce RTX 2080 Ti 11GB - Almost 30% faster than the GTX 1080 Ti and RTX 2080, this card is a fantastic choice for OctaneRender! It comes very close to the performance of the Titan RTX for less than half the price.

- Redshift => As mentioned above, the video card selection is the driving factor for performance in Redshift. The faster the better, and you can also use multiple GPUs to further speed up rendering. There are two aspects of a video card that impact render capabilities: the raw speed of the GPU itself and the amount of memory on the card. Video memory will limit how large and complex of scenes can be rendered effectively, though Redshift does support "out of core" rendering which will allow system memory to be used if there is not enough dedicated GPU memory available... but that comes with a reduction in speed, so it is best to get video cards with enough RAM onboard if at all possible. GeForce cards tend to have good raw performance, with decent amounts of video memory, while Quadro cards come with larger amounts of VRAM but also cost more for the same level of raw performance. 1. GeForce RTX 2070 SUPER 8GB - The RTX 2070 SUPER is an upgrade from the RTX 2070, which in turn was a replacement for the older GTX 1080 - with similar performance in Redshift but newer features. Those newer features, particularly RT cores (hardware based ray tracing), make the older 1000-series obsolete... while the increased speed of the 2070 SUPER puts it in the same performance range as the 2080 and 2080 SUPER but at a lower cost. 2. GeForce RTX 2080 Ti 11GB - About 30% faster than the GTX 1080 Ti and RTX 2080, this card is a fantastic choice for Redshift! It comes very close to the performance of the Titan RTX for less than half the price.

- V-Ray => As with the CPU recommendation above, the choice here depends heavily on which version of V-Ray you plan to use. For V-Ray Adv, nothing special is needed from the video card. Your best bet there would be to select a card that is appropriate for whatever other software you plan to run alongside: Cinema 4D, Maya, 3ds Max, etc. However, for V-Ray RT the video card selection is the biggest single factor in rendering speed / performance. RT has a couple of different modes, though not all plugin versions support both. An OpenGL mode exists in some versions for use with AMD graphics cards, but the main focus is on the CUDA mode for NVIDIA cards. We have tested that with up to four GPUs and found the scaling to be quite good. Faster cards also perform better, of course, so it really is a balancing act to find the combination of cards that best fit your budget. 1. GeForce RTX 2060 SUPER 8GB - Generally speaking, the RTX 2060 SUPER is a solid starting point - as fast as the older 1080 Ti, and with as much VRAM as the 2070 and 2080 (not Ti) variants, but for a lower price. 2. GeForce RTX 2080 Ti 11GB - Our go-to recommendation for most GPU rendering customers, the RTX 2080 Ti provides the best performance before moving up to the Titan series - while also having the RT cores that are emblematic of this GPU generation. It also has nearly as much VRAM: 11GB vs 12GB on the Titan V.

- Autodesk AutoCAD => For AutoCAD, the video card is what handles displaying the 2D and 3D models on the screen. Only 3D models require anything more than a basic GPU, though, so if you will only be working with 2D models then you are better off saving money on the GPU and putting that money towards a faster CPU, SSD, or more RAM. Either way, we recommend using a workstation NVIDIA Quadro card. Mainstream GeForce cards can technically get you better performance for your dollar, although the downside is that they are not officially certified for use in AutoCAD by Autodesk. Because of this, we highly recommend using a Quadro card in any professional environment to ensure that you will be able to get full support from Autodesk if you ever have a software issue. In most situations, the faster the video card the better performance (in terms of frames per second) you will get when working with a 3D model. However, we have found that, except in extreme situations, there is little to no noticeable benefit to using anything faster than a Quadro P2000. AutoCAD is also very light on VRAM usage, so there is no reason to pay out for a card with lots of VRAM for strictly AutoCAD use.

- Autodesk Inventor => The video card handles displaying the 2D and 3D models on the screen in Inventor. While mainstream GeForce cards can technically give you better performance per dollar, Autodesk only officially certifies and recommends workstation cards such as those in NVIDIA's Quadro product line. Because of this, we highly recommend using a Quadro card in any professional environment to ensure that you will be able to get full support from Autodesk if you ever have a software issue. In most situations, the faster the video card the better performance (in terms of frames per second) you will get when working with 3D models and assemblies. In general, a Quadro P2000 5GB is great for small assemblies, while a Quadro P4000 8GB will be better for medium assemblies. If you work with very large assemblies, you may even consider a Quadro P5000 16GB.

- Autodesk Revit => When working with models in Revit, the video card is solely used to display the model on the screen. While a more powerful video card may allow the model to be drawn at a higher FPS (frames per second) when rotating, zooming, or panning around the model, the video card requirements for Revit are relatively low. For most users, a mid-range card (such as a Quadro P2200 or RTX 4000) will be more than powerful enough. Between consumer and professional video cards, Autodesk's official policy is that they "only recommend and support the professional NVIDIA Quadro and AMD FirePro graphics family cards" [source]. Because of this, our recommended systems default to NVIDIA Quadro video cards but in some situations (such as VR visualization) a consumer GeForce card may be a better option. However, be aware that these consumer cards are not quite as reliable as the professional cards and do not have official Autodesk support.

- IrisVR Prospect => Having a powerful video card is critical for virtual reality performance as it directly impacts the ability of the computer to keep up with the high resolution and frame rate which a good VR experience requires. The initial release versions of the HTC Vive and Oculus Rift both have dual displays, one for each eye, at 1080x1200 resolution. These run at 90Hz, which is 50% higher than the standard refresh rate of monitors, and the perspective of each eye is slightly different so it is actually rendering two distinct views rather than just a single, larger display. Some VR head mounted displays also require off-screen rendering of an area around the actual display that each eye is given, or allow super sampling for better image quality, both of which require further resources from the video card. Within NVIDIA's GeForce RTX series, the RTX 2070 8GB is a solid choice for VR. If you want to use a higher resolution headset like the Vive Pro, or plan to work with particularly large 3D models, then going to the RTX 2080 8GB is a decent upgrade - and the 2080 Ti 11GB is about as high as you'd want to go currently.

- Solidworks => Dassault Systemes advises their customers to use a workstation card, and there are actually some significant performance advantages to doing so. We found that in some situations a low-end Quadro can outperform "faster" GeForce cards. In addition, using a certified card is the only way to get official support for features like Realview and Ambient Occlusion. Prior to SOLIDWORKS 2019, there was very little performance difference between different Quadro video cards - except at the very high end, with extremely complex assemblies on high resolution monitors. However, there is a new feature that debuted in SW 2019 called “Enhanced graphics performance” mode. With that enabled, the video card takes on much more of the work involved when displaying parts and assemblies. That also leads to a much greater difference in frame rates with different Quadro cards, which can be seen in the graphs in PugetSystems' original article.

- Agisoft Metashape => While the CPU impacts almost every step in Metashape, some parts of the workflow are GPU-accelerated as well. Most notably, aligning photos and building depth maps - as well as the mesh, if using depth maps as the source - are all impacted by the speed of your video card. Multiple video cards can also help boost performance in some projects, especially when using depth maps heavily. 2 x GeForce RTX 2080 Ti 11GB - For the best performance in Metashape, if you are using depth maps as the source data for creating meshes, we recommend a pair of GeForce RTX 2080 Ti cards. They will outperform the Titan RTX in this situation for about the same price, but they are not cheap. Any GeForce RTX Series Card - If you are not going to use the depth maps for building meshes, or if you are but dual 2080 Ti cards are too expensive, then literally any other RTX series card will do just fine. They are only a few percent apart in terms of performance, so get the one that best suits your budget.

- Pix4D => While most of the processing in Pix4D is done on the CPU, having a CUDA-compatible GPU can speed up processing in some parts of the calculations. Because of the CUDA requirement, only NVIDIA graphics cards can be utilized for this boost - and moreover, only a single GPU can be used at a time. That means one mid-range to high-end video card is all that you need for Pix4D. 1. GeForce RTX 2070 8GB - This is our go-to recommendation for Pix4D, as it performs in line with higher-end cards like the RTX 2080 and Titan RTX while costing less. 2. GeForce RTX 2060 6GB - This slightly lower model video card isn't all that much slower than the RTX 2070 in Pix4D, so if you need to save some money it is a solid choice.

- RealityCapture => RealityCapture requires a NVIDIA graphics card for full operation because it uses CUDA for some of the key processing. Without that, you can technically run the program and perform some basic steps like registering images - but you cannot create a mesh / 3D model. We found that there isn't a huge difference between modern mid-range and high-end video cards, but there is a little bit of a performance gain from spending more. However, using two video cards instead of one also provides a sizable benefit. As such, our recommendation is to use two video cards if your budget allows - and these are some of the sweet spots:
2 x GeForce RTX 2080 Ti 11GB - These are our top-end recommendation, and outperform a single Titan RTX for about the same price. Still, a pair of these is pretty expensive - so one of the other options below will also work quite well.
2 x GeForce RTX 2080 8GB - The RTX 2080 is about 5% slower than the 2080 Ti in RealityCapture, but still quite fast.
2 x GeForce RTX 2070 8GB - The difference between the RTX 2070 and 2080 is even smaller than that between the 2080 and 2080 Ti, and it doesn't lose any VRAM either (though that seems to have little, if any, impact on performance in this application).
1 or 2 x GeForce RTX 2060 6GB - One of the RTX 2060 cards is about as low as we would recommend going, since the drop-off between this and the next model (GTX 1660 Ti) is more substantial and the price savings is less than $100. Two of them is also a solid option, offering performance on par with a single RTX 2080 for less money.

3a. Machine Learning AI / TensorFlow => Recommended hardware: 2 or 4 RTX 2080 Ti, RTX 2070, or Titan V GPUs (see the short TensorFlow sketch after this section)

3b. Machine Learning Development => Recommended hardware: 1. RTX 2080 Ti or Nvidia Quadro RTX 6000 or 8000.

NOTE:
Recommendations were made by PugetSystems.
>>>>>>>>>>>>>>>>>>>>>>>>>
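A quick aside on the TensorFlow item above (this bit is mine, not from PugetSystems): a minimal sketch of where those 2-4 GPUs actually go, using TensorFlow's MirroredStrategy to replicate a toy model across every visible GPU and split each batch between them.

import numpy as np
import tensorflow as tf

print("GPUs visible:", len(tf.config.list_physical_devices("GPU")))

strategy = tf.distribute.MirroredStrategy()      # data-parallel across all local GPUs
with strategy.scope():                           # variables get mirrored onto each GPU
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy data purely for illustration; each batch is sharded across the GPUs
x = np.random.rand(1024, 32).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, batch_size=256, epochs=1)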

I hope you can now see the use cases for the various types of cards, whether it's gaming GPUs, workstation GPUs or data center GPUs.
A good percentage of those applications benefit from having the best gaming GPU like the 2080 Ti.


Another unique application is:
- GPU accelerated streaming:
Modern graphics processing units (GPUs) have dedicated video encoding / decoding hardware built-in, which is usually not operating when games are being played. NVIDIA has taken advantage of this to provide hardware-accelerated game capture - either recording or streaming, but not both at the same time. They call this technology NVIDIA ShadowPlay, and it is available for free on GeForce series video cards as part of the GeForce Experience software package. ShadowPlay is easy to set up but the features and configuration options are more limited than robust software programs like OBS. Including a webcam feed or microphone input is easily done, but that is about all: no options for overlays, streaming is limited to Twitch only, etc. It is a great way to get started with streaming, though, since it doesn't cost anything (at least for gamers using GeForce cards already) and has almost no impact on CPU usage or game performance.
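ShadowPlay itself is closed software, but the same NVENC encoder block it relies on can be driven from other tools. A rough sketch (assuming an ffmpeg build compiled with NVENC support and a hypothetical capture file):

import subprocess

# Re-encode a capture on the GPU's NVENC block instead of the CPU (h264_nvenc encoder)
subprocess.run([
    "ffmpeg",
    "-i", "gameplay_capture.mp4",   # hypothetical input file
    "-c:v", "h264_nvenc",           # NVIDIA hardware H.264 encoder
    "-b:v", "6M",                   # ~6 Mbit/s, a typical streaming bitrate
    "gameplay_nvenc.mp4",
], check=True)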

How much money is there in professional gamers who stream? And how important is the quality of that live stream? You get the idea.
>>>>>>>>>>>>>>>>>>>>>>>>>

Oh, I didn't even mention the workplace upgrade cycle, where workstation machines get upgraded based on their life in service. So when it's time for a replacement, a certain performance/dollar is chosen for a new workstation computer with a GPU.

The newly announced Nvidia GeForce 30 series (3070, 3080, 3090) will be the #1 choice for a variety of applications once it hits the market.

In the business world, time is money, and if you're waiting too long for your GPU, your competition is outpacing you.

Best!
 

GeorgeBynum

Active Member
Joined
Feb 14, 2020
Messages
121
Likes
103
Location
Greenville SC USA
Gaming. You can never have enough performance in the graphics card, so you buy as much as you can afford if you are a gamer.

If you aren't into gaming, you will never understand. It is an alien parallel world. :)
And if you consider the new MS Flight Simulator a game ... Flight controls for your desktop are as rare as TP (loo roll for you Brits) was 3 months ago.
 

Astrozombie

Senior Member
Forum Donor
Joined
May 7, 2020
Messages
393
Likes
147
Location
Los Angeles
I used to think these cards were too expensive too, but then I compared them, accounting for how much currency values have changed, with how much cards cost back in the 90s/00s, and it's actually pretty close to the same.
 

renaudrenaud

Major Contributor
Joined
Apr 20, 2019
Messages
1,310
Likes
2,874
Location
Tianjin
OK, so you can order whichever card you want, even a really power-hungry one, as long as you use this next item from AMD:
[attached image]
 

Vasr

Major Contributor
Joined
Jun 27, 2020
Messages
1,409
Likes
1,926
I used to think these cards were too expensive too, but then I compared them, accounting for how much currency values have changed, with how much cards cost back in the 90s/00s, and it's actually pretty close to the same.

Compare them on performance/dollar, pre-bitcoin-mining craze.
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,192
Location
Riverview FL
Compare them on performance/dollar, pre-bitcoin-mining craze.

I haven't looked at the hash rate for a while...

Yikes!

130 million terahashes per second...

130,000,000 x 1,000,000,000,000 = 130,000,000,000,000,000,000 hashes per second...
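Spelled out in a couple of lines:

terahash = 1_000_000_000_000            # 1 TH = 10^12 hashes
network_rate_th = 130_000_000           # ~130 million TH/s, i.e. 130 EH/s
print(f"{network_rate_th * terahash:,} hashes per second")
# 130,000,000,000,000,000,000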
 

beefkabob

Major Contributor
Forum Donor
Joined
Apr 18, 2019
Messages
1,658
Likes
2,114
I'm thinking 3070 or 3080 to get 120Hz 4K HDR video and G-Sync. Really I just want HDMI 2.1 now. I'm not playing any AAA games, but my son would love to.
 