by icycalm » 05 Jan 2010 21:51
by Adjudicator » 21 Feb 2010 03:56
Which NVIDIA GeForce GPUs support PhysX?
GeForce 8-series GPUs and later (with a minimum of 32 cores and a minimum of 256MB of dedicated graphics memory) support PhysX.
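Those minimums are easy to check programmatically. A minimal sketch, assuming the third-party pycuda package and a working NVIDIA driver; the cores-per-SM table is an illustrative approximation by compute-capability generation, not an official mapping:

import pycuda.driver as cuda

# Approximate CUDA cores per SM by compute capability major version
# (illustrative only; e.g. CC 2.1 parts actually have 48 per SM).
CORES_PER_SM = {1: 8, 2: 32, 3: 192, 5: 128, 6: 128, 7: 64}

cuda.init()
dev = cuda.Device(0)
major, _minor = dev.compute_capability()
sm_count = dev.get_attribute(cuda.device_attribute.MULTIPROCESSOR_COUNT)
cores = sm_count * CORES_PER_SM.get(major, 0)
mem_mb = dev.total_memory() / (1024 ** 2)

print(f"{dev.name()}: ~{cores} CUDA cores, {mem_mb:.0f}MB")
# Thresholds are the ones quoted in the FAQ answer above.
print("Meets PhysX minimums" if cores >= 32 and mem_mb >= 256 else "Below PhysX minimums")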
by icycalm » 01 Oct 2012 15:32
Vlad Savov wrote:And yet, the stalwart PC continues to defy exaggerated reports of its demise with hugely popular series like Battlefield, Crysis, Diablo, StarCraft, and Total War offering gaming experiences that simply cannot be touched by consoles.
Vlad Savov wrote:Triple-monitor gaming is simply amazing. A sensory revelation. You still spend the majority of the time with your eyes anchored on the middle screen, but the sense of atmosphere that comes from the two auxiliary displays is spine-tinglingly good. After a while, you may even learn to look at things with your own neck rather than the mouse. In truth, this is how any sort of visual simulation is supposed to operate. The tight horizontal field of view of a typical monitor, no matter how resplendent its color reproduction may be, is just unnatural — human beings have peripheral vision which has gone (mostly) neglected, and it’s only once you move to this sort of surround view that you’ll understand what you’ve been missing.
Vlad Savov wrote:CAVEAT EMPTOR
The new generation of multi-monitor-gaming-on-a-single-card GPUs is indeed impressive, but if you want to make that $499 (and above) leap, bear in mind that you'll have to upgrade more than just the card itself. You'll likely need new monitors, and don't assume you'll be able to get away with the same old bureau, either: multi-display gaming requires a space closer to a dining table than a desk. Monitors need to match in screen size and resolution — ideally they should be identical, full stop, so that bezel gaps and positions line up — and have the thinnest possible bezels to maximize immersion.
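Savov's point about peripheral vision can be put in numbers: the horizontal field of view a display wall subtends is 2 * atan(width / (2 * distance)). A quick sketch, assuming a typical 24-inch 16:9 panel (roughly 53cm wide) viewed from 60cm; the figures are illustrative, not from the article:

import math

def horizontal_fov_deg(total_width_cm, distance_cm):
    # Angle subtended by a flat display wall of the given total width.
    return math.degrees(2 * math.atan((total_width_cm / 2) / distance_cm))

single = horizontal_fov_deg(53, 60)      # one 24-inch monitor
triple = horizontal_fov_deg(3 * 53, 60)  # three side by side, flat, ignoring bezels
print(f"single: {single:.0f} deg, triple: {triple:.0f} deg")
# -> roughly 48 vs. 106 degrees, against ~190 degrees of human horizontal vision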
by icycalm » 22 Jan 2013 20:39
Mike S. wrote:NVIDIA's current flagship graphics card, the GTX 680, is based on the Kepler GPU architecture. However, being the GK104 variant, it's not the top model, or "Big Kepler" as it's been dubbed. That chip found its way into the super-expensive Tesla K20 HPC card late last year, with roughly 18,700 of those cards going into the Cray Titan supercomputer! So far, however, no GeForce card based on this GPU has been announced, and crucially there was no announcement from NVIDIA at CES 2013 earlier this month, where they unveiled their Project SHIELD portable gaming platform running on Android.
Now, rumors (have a pinch of salt at the ready) have leaked out that a variant of this chip will finally be at the heart of a mega-powerful single-chip graphics card. This will be the GK110 GPU, though unfortunately not the full version, since it will have only 14 of the die's 15 SMX units enabled. This means it will have 2688 CUDA cores rather than the 2880 of an uncut GK110. The GPU will be clocked at a conservative 732MHz, and it will feature a massive 6GB of GDDR5 memory with an effective memory clock of 5200MHz attached via a 384-bit bus. Performance is reported to be around 85% that of a GTX 690, and the card will sell for about $900. Finally, the card is expected to be named the "GeForce Titan", fitting in nicely with the name of Cray's supercomputer.
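Taking the rumored figures at face value, the memory bandwidth they imply is simple to derive: a 384-bit bus moves 48 bytes per transfer, at an effective 5200 million transfers per second. A sketch using only the numbers quoted above:

# Peak memory bandwidth implied by the rumored specs:
# 384-bit bus = 48 bytes per transfer, 5200MHz effective = 5.2e9 transfers/s.
bus_width_bits = 384
effective_rate = 5200e6                    # transfers per second
bandwidth_gbs = (bus_width_bits / 8) * effective_rate / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")         # -> 249.6 GB/s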
by icycalm » 22 Feb 2013 16:27
by icycalm » 02 Mar 2013 16:34
by icycalm » 02 Mar 2013 21:22
by movie » 24 Apr 2013 01:28
It has been reported by Brightsideofnews that NVIDIA will be launching its GeForce 700 series, including the flagship GeForce GTX 780, as early as May 2013. According to information from their Asia-based sources, the company is planning to launch three new GeForce 700 series SKUs featuring a refined version of the 28nm Kepler architecture.
The most interesting part of the report is that we have already been hearing about the GeForce GTX Titan LE for the past few months. The card, a consumer version of the already released Tesla K20C, would not be another Titan GPU but rather the next flagship, the GeForce GTX 780, featuring the same specifications. The GeForce GTX 780 would be powered by the GK110 core, boasting 2496 CUDA cores and 5 GB of GDDR5 VRAM, as indicated in a leaked picture of the board itself.
The leak could now be considered credible, since it was mentioned that Titan LE was just an internal codename: the retail name would be something else, and the card would have a different cooling scheme from the Titan. It was speculated that the Titan LE, now known as the GTX 780, would be priced around the $499–$599 mark.
The other two boards, the GeForce GTX 770 and GeForce GTX 760 Ti, would be based on the GK104-425, featuring higher clock speeds and much better memory interfaces. The GeForce GTX 770, for instance, is said to feature 4GB of GDDR5 memory, 1536 cores (the same as a GTX 680), and around 20-25% better performance than the GeForce GTX 670. The GeForce GTX 760 Ti, on the other hand, would also be based on the GK104-425 core and would get 2 GB of VRAM with a 256-bit interface and higher clock speeds. The card would be around 20-23% faster than the GeForce GTX 660 Ti and would give the Radeon HD 7800 series GPUs a hard time.
by icycalm » 31 May 2013 02:42
by icycalm » 26 Sep 2013 22:33
Scott Michaud wrote:Not a lot is known about the top end, R9 290X, except that it will be the first gaming GPU to cross 5 TeraFLOPs of compute performance. To put that in perspective, the GeForce Titan has a theoretical maximum of 4.5 TeraFLOPs.
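For reference, peak FP32 figures like these are derived the same way for both vendors: each shader core retires one fused multiply-add (two FLOPs) per clock. A sketch reproducing the Titan figure from its known 2688 cores and 837MHz base clock (the 290X's core count and clock were not yet public at the time):

# Theoretical peak FP32 throughput: cores * 2 FLOPs (one FMA) * clock.
def peak_fp32_tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz * 1e6 / 1e12

print(f"{peak_fp32_tflops(2688, 837):.1f} TFLOPs")  # GeForce Titan -> ~4.5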
Scott Michaud wrote:The "Hawaii" powered Radeon R9 290 and R9 290X graphics cards are expected to handle CrossFire pacing acceptably at launch. Clearly, if there is ever a time to fix the problem, it would be in new hardware. Still, this is good news for interested customers; if all goes to plan, you are likely going to have a good experience out of the box.
Current owners of GCN-based video cards, along with potential buyers of the R9 280X and lower upcoming cards, will apparently need to wait for AMD to release a driver to fix these issues. That driver is not far off, however: Koduri (it was unclear whether on or off the record) is aiming for an autumn release. It is expected to cover frame pacing issues for CrossFire, Eyefinity, and 4K.
Koduri believes the CrossFire issues were unfortunate and expressed a desire to fix them for his customers.
by El Chaos » 29 Mar 2014 22:08
by icycalm » 29 Mar 2014 23:48
by icycalm » 30 Mar 2014 00:06
by icycalm » 03 Apr 2014 17:09
Tim Verry wrote:the GTX TITAN Z is a triple-slot graphics card
by El Chaos » 05 Feb 2015 23:20
by El Chaos » 26 Jun 2015 23:55
Richard Leadbetter wrote:The case for the Radeon R9 390X is crystal clear: it's significantly cheaper than the GTX 980, it has twice the amount of on-board GDDR5, and following the upclock and RAM revision, it is comparable from a performance perspective - as long as you're not a 1080p gamer, where the GTX 980 clearly remains the best choice. However, if your display operates at 1440p or an even higher resolution, the performance credentials of the card bring it into line with the competition, surpassing the GTX 980 in several of our test games.
The case against is equally clear: there's little overclocking headroom left, and the 390X's power consumption is quite staggering. But let's be clear: a power-hungry card isn't necessarily a nuisance from a noise perspective; in fact, MSI's cooler here is quite remarkable in terms of both its low noise and its thermal dissipation. However, the fact remains that a lot of heat is being pumped into your case, and you'll need to ensure that your chassis is capable of expelling it effectively - the room will definitely get noticeably warmer as a consequence! Additionally, we find the 390X hard to recommend for multi-GPU scenarios, whereas Nvidia's GTX 970 and 980 both perform well here thanks to their enviable power efficiency, plus better driver support.
Overall, there's perhaps a sense that AMD has dragged the 290X kicking and screaming into a performance envelope comparable with the GTX 980, but in the final analysis there's no denying that AMD has achieved what it set out to do: the R9 390X offers a gameplay experience similar to its Nvidia counterpart, though the decent overclocking of the GTX 980 may well give the green team final bragging rights.
But it's the practical value of the R9 390X's 8GB of GDDR5 that remains something of a mystery. Actual use-case scenarios beyond 4GB are limited right now, and the decision to kit out 390X with twice the memory of the new Fury X flagship is curious to say the least. But what's clear is that games' appetite for VRAM is only going to move in one direction, and as developers have been telling us for years now, the more GPU memory you have, the better - and in R9 390X, it's kind of reassuring just to know that it's there.
Richard Leadbetter wrote:If the performance numbers between the products are within a few percentage points, potential buyers may well find themselves looking beyond the numbers in choosing which GPU to purchase - do they go for the higher level of VRAM with Nvidia and its lower power consumption, or do they opt for the smaller form factor and quiet liquid cooling of the Radeon Fury? One thing's for sure, there's a fascinating face-off between the green and red teams coming up, and the fact that there absolutely is competition at the top level of GPU performance can only be a good thing for the market. We'll be reviewing the Radeon Fury X as soon as we can.
by El Chaos » 27 Jun 2015 23:10
Richard Leadbetter wrote:There have been some excellent deep dives on the make-up of the Fiji processor at the heart of the Radeon R9 Fury X - one of the best we've seen comes from The Tech Report, breaking down performance from individual components of the GPU and analysing their strengths and weaknesses. It shows that while a great many elements of the chip are best in class, others show little improvement from R9 290X - which may explain some of the results seen at lower resolutions. But the overall takeaway we have is that the Fiji hardware is a strong technological achievement - and as seen in the 4K results, where GPU capabilities are the absolute focus of attention, Fury X is competitive - it's a serious rival, and that's exactly what the market needs. The fact that AMD's new card is a good deal cheaper than current GTX 980 Ti prices (in the UK at least) is also a factor worth taking into account.
On the flip side, if you're pushing for balance between frame-rates and visual refinement, even with this new wave of uber-cards, you need to drop down from 4K to a more reasonable resolution - and the further down you drop, the more dominant GTX 980 Ti becomes, especially so when overclocked. But on a more general level, we do have to wonder whether 4K may well be something of a blind alley for PC gaming, bearing in mind that the physical size of 4K PC displays is remaining static (which is why many PC gamers are considering large 4K UHD TVs as monitor replacements). Right now, our gut feeling remains that the new wave of 34-inch 3440x1440 monitors with the super-wide 21:9 aspect ratio may well be the more natural home for the likes of the Fury X and the GTX 980 Ti - there's just 60 per cent of the resolution to drive, and there's a generally more immersive experience owing to the expanded field of view. LG, Asus and others are also incorporating FreeSync and G-Sync technologies into these panels too, making them even more compelling - we hope to review one of these displays soon.
But that's a discussion for another time. In the here and now, it's fair to say that AMD has produced an innovative, powerful piece of hardware - and if 4K gaming is your bag, it's a very serious contender. But there's the sense that the complete package hasn't quite been delivered - the hardware is there, but perhaps the accompanying software has fallen short. We already know that DX12 and Vulkan will solve AMD's API overhead issues (which may have something to do with the sub-par 1080p results), but that's going to take a while to proliferate into the games we actually play. With HBM, we are already seeing sleek, smaller form factors, but right now there's no knock-out blow in terms of gaming performance vs old-school GDDR5 VRAM. And in terms of flexibility - whether we're talking about running at sub-4K resolutions or even support for HDMI 2.0 - we're surprised to see AMD concede ground to Nvidia in any area on what is a flagship product.
In conclusion, a straight comparison of these high-end GPUs is fascinating. AMD and Nvidia have invested in innovation in completely different areas, the red team banking on the remarkable HBM, its counterpart relying more heavily on the power efficiency and performance of the second generation Maxwell architecture (and leaving its own version of HBM in reserve until next year's hardware). At 4K, these entirely different approaches have ended up yielding remarkably similar performance with accomplished levels of efficiency - but what we really need from both sides is brand new core technology, combined with smaller 16nm FinFET manufacturing, due next year. Factor in HBM and DX12 and within a couple of years, the next-gen Fury and its Titan equivalent should leave both of these current cards in the dust. In the here and now, AMD may not have bested Nvidia - but as we move away from the 28nm era, it's clear that the appetite and the technology are there for us to see some serious competition between the two.
by El Chaos » 05 Jan 2016 18:32
Richard Leadbetter wrote:Best graphics card under £100 / $130: GeForce GTX 750 Ti (Honourable mention: Radeon R7 360)
Under £130 / $160: GeForce GTX 950 (Honourable mention: Radeon R7 370)
Under £200 / $250: Radeon R9 380/380X (Honourable mention: GeForce GTX 960)
Under £250 / $330: GeForce GTX 970 (Honourable mention: Radeon R9 390)
Under £400 / $500: GeForce GTX 980 (Honourable mention: Radeon R9 390X)
Over £400 / $500: GeForce GTX 980 Ti (Honourable mention: Radeon R9 Nano)
by El Chaos » 19 May 2016 21:43
by El Chaos » 05 Jul 2016 23:26
by El Chaos » 03 Sep 2016 14:32
by icycalm » 27 Jul 2018 10:04
IbizaPocholo wrote:
- GeForce GTX 1180: 30/08
- GeForce GTX 1170 and 1180+: 30/09
- GeForce GTX 1160: 30/10
by icycalm » 10 Aug 2018 17:26
gspat wrote:New cards:
Titan RTX
GeForce RTX 2080
GeForce RTX 2070
GeForce GTX 2060
GeForce GTX 2050
Emphasis on ray tracing in the higher-end cards.
by icycalm » 20 Aug 2018 17:28
Boss Moogle wrote:RTX 2080 Ti FP32 performance is expected to be somewhere around 13 TFLOPs.