NVIDIA Tesla K20 computational accelerator card

DMC:
Anyone know about this technology?

I ordered one to see what the hype is all about.

I will install it in a "regular" workstation PC along with a GTX 690 GPU card.

I have heard that the average application runs 10x faster!

Most processing will be done by the Tesla card, NOT by the PC's main processors on the motherboard.

Maybe Intel will lose its hold on the market? It looks like a graphics card, but has no display ports to hook up a monitor.



[Images: Tesla K20 card; Tesla K20 vs. Xeon comparison charts (Fermi comparison, application speedups)]
 
DMC:
NVIDIA Tesla K20 - Active - Safe Harbor 800-544-6599

This is the one I think I need: the "Active" model of the K20, made specifically for workstation PCs.

Everything else is designed for servers. "Active" refers to the cooling; in a workstation you want the fan.

*Don't buy the two on eBay. One is the wrong model for this application, and the other costs the same as new (open box and no papers).

 
CoolHandLuke:
These were designed to be used in tandem with other graphics cards in an SLI configuration. Hope you like it!
 
DMC:
I did not see that in any of the literature.

Did you see that somewhere on NVIDIA's website? I think I have read almost everything there is to read, and I do not remember seeing it.

These are not graphics cards, and of course you still need an actual graphics card. Like I mentioned, I am trying to get it to work with a GTX 690.

As far as needing two GTX 690s: I have not seen anything to suggest this. A GTX 690 is actually two cards built into one; it is two 680s smushed together. So, again, I don't understand why you say I need SLI enabled. That is usually only for a single monitor running on two graphics cards (which I am NOT doing).
 
DMC:
We run four monitors on the CAM station we are adding the Tesla to.

I believe it is impossible to enable SLI while running four monitors. No?

Even if you do have two graphics cards.

SLI uses one card as a "slave," and you cannot hook a monitor up to the slave in that kind of config. Only to the master card.

A single 4 GB 690 card will not need a slave in SLI. It is the world's most powerful GPU to date. It has four monitor outputs, and I do not plan on adding another GPU. Only the Tesla and the 690.

I have now pulled the trigger on $4,000 worth of gizmos in an attempt to decrease the processing times for CAM calculations, file conversions, importing CAD files into CAM, and other operations ($1k for the GTX 690 and $3k for the Tesla K20). I will try to give honest results, comparing apples to apples (before and after calculation times).

In a week or so I should have some results to post up.

This is what we have to replace: a GTX 550 Ti and a GTX 460, not in SLI, because again, we run four monitors and use all the outputs on both cards. One can't be a slave.

[Image: current setup with GTX 550 Ti and GTX 460]
 
DMC:
GeForce GTX 690 | GeForce


The GTX 690 is something I have not tried yet, but I have one on order.

It is two NVIDIA Kepler GK104 GPUs on one single card! It has over 3,000 CUDA cores!

Whatever graphics card you have, it will bow down to this new card, and then some.

The days of SLI are over, as far as I am concerned.

7 billion transistors!

One single GTX 690 = two GTX 680s in SLI, so no SLI is needed, IMO.

Anyone else have some data or opinions?

"Based on a pair of Kepler GK104 GPUs, the GeForce GTX 690 would be NVIDIA’s new flagship dual-GPU video card. And by all metrics it would be a doozy. Packing a pair of high clocked, fully enabled GK104 GPUs, NVIDIA was targeting GTX 680 SLI performance in a single card, the kind of dual-GPU card we haven’t seen in quite some time. GTX 690 would be a no compromise card – quieter and less power hungry than GTX 680 SLI, as fast as GTX 680 in single-GPU performance, and as fast as GTX 680 SLI in multi-GPU performance. And at $999 it would be the most expensive GeForce card yet.
After the announcement and based on the specs it was clear that GTX 690 had the potential, but could NVIDIA really pull this off? They could, and they did. Now let’s see how they did it.
                       GTX 690         GTX 680         GTX 590         GTX 580
Stream Processors      2 x 1536        1536            2 x 512         512
Texture Units          2 x 128         128             2 x 64          64
ROPs                   2 x 32          32              2 x 48          48
Core Clock             915MHz          1006MHz         607MHz          772MHz
Shader Clock           N/A             N/A             1214MHz         1544MHz
Boost Clock            1019MHz         1058MHz         N/A             N/A
Memory Clock           6.008GHz GDDR5  6.008GHz GDDR5  3.414GHz GDDR5  4.008GHz GDDR5
Memory Bus Width       2 x 256-bit     256-bit         2 x 384-bit     384-bit
VRAM                   2 x 2GB         2GB             2 x 1.5GB       1.5GB
FP64                   1/24 FP32       1/24 FP32       1/8 FP32        1/8 FP32
TDP                    300W            195W            375W            244W
Transistor Count       2 x 3.5B        3.5B            2 x 3B          3B
Manufacturing Process  TSMC 28nm       TSMC 28nm       TSMC 40nm       TSMC 40nm
Launch Price           $999            $499            $699            $499
As we mentioned earlier this week during the unveiling of the GTX 690, NVIDIA is outright targeting GTX 680 SLI performance here with the GTX 690, unlike what they did with the GTX 590 which was notably slower. As GK104 is a much smaller and less power hungry GPU than GF110 from the get-go, NVIDIA doesn't have to do nearly as much binning in order to get suitable chips to keep their power consumption in check. The consequence of course is that much like GTX 680, GTX 690 will be a smaller step up than what NVIDIA has done in previous years (e.g. GTX 295 to GTX 590), as GK104's smaller size means it isn't the same kind of massive monster that GF110 was.
In any case, for GTX 690 we're looking at a base clock of 915MHz, a boost clock of 1019MHz, and a memory clock of 6.008GHz. Compared to the GTX 680 this is 91% of the base clock, 96% of the boost clock, and the same memory bandwidth; this is the closest a dual-GPU NVIDIA card has ever been to its single-GPU counterpart, particularly when it comes to memory bandwidth. Furthermore GTX 690 uses fully enabled GPUs – every last CUDA core and every last ROP is active – so the difference between GTX 690 and GTX 680 is outright the clockspeed difference and nothing more.
[Image: GeForce GTX 690]
Of course this does mean that NVIDIA had to make a clockspeed tradeoff here to get GTX 690 off the ground, but their ace in the hole is going to be GPU Boost, which significantly eats into the clockspeed difference. As we’ll see when we get to our look at performance, in spite of NVIDIA’s conservative base clock the performance difference is frequently closer to the smaller boost clock difference.
As another consequence of using the more petite GK104, NVIDIA’s power consumption has also come down for this product range. Whereas GTX 590 was a 365W TDP product and definitely used most of that power, GTX 690 in its stock configuration takes a step back to 300W. And even that is a worst case scenario, as NVIDIA’s power target for GPU boost of 263W means that power consumption under a number of games (basically anything that has boost headroom) is well below 300W. For the adventurous however the card is overbuilt to the same 365W specification as the GTX 590, which opens up some interesting overclocking opportunities that we’ll get into in a bit.
For these reasons the GTX 690 should (and does) reach performance nearly at parity with the GTX 680 SLI. For that reason NVIDIA has no reason to be shy about pricing and has shot for the moon. The GTX 680 is $499, a pair of GTX 680s in SLI would be $999, and since the GTX 690 is supposed to be a pair of GTX 680s, it too is $999. This makes the GTX 690 the single most expensive consumer video card in the modern era, surpassing even 2008’s GeForce 8800 Ultra. It’s incredibly expensive and that price is going to raise some considerable ire, but as we’ll see when we get to our look at performance NVIDIA has reasonable justification for it – at least if you consider $499 for the GTX 680 reasonable.
Because of its $999 price tag, the GTX 690 has little competition. Besides the GTX 680 in SLI, its only other practical competition is AMD's Radeon HD 7970 in Crossfire, which at MSRP would be $40 cheaper at $959. We've already seen that GTX 680 has a clear lead on the 7970, but thanks to differences in Crossfire/SLI scaling that logic will have a wrench thrown in it. But more on that later.
Finally, there’s the elephant in the room: availability. As it stands NVIDIA cannot keep the GTX 680 in stock in North America, and while the GTX 690 may be a very low volume part due to its price, it requires 2 binned GPUs, which are going to be even harder to get. NVIDIA has not disclosed the specific number of cards that will be available for the launch, but after factoring the fact that OEMs will be sharing in this stockpile it’s clear that the retail allocations are certainly going to be small. The best bet for potential buyers is to keep a very close eye on Newegg and other e-tailers, as like the GTX 680 it’s unlikely these cards will stay in stock for long."

more... AnandTech - NVIDIA GeForce GTX 690 Review: Ultra Expensive, Ultra Rare, Ultra Fast


Scott
 
CoolHandLuke:
It doesn't do what you want it to do. These cards were made to be slaves that boost the speed of your existing card, bloating the processing on the video card and essentially making one card plus the Tesla as capable as two cards in SLI. It has the added advantage of multiple-monitor support. You don't need to enable SLI for it to work; just connect the bridge and power coupling. Apart from that, any CrossFire system would be far more flexible and capable than these SLI rigs, not to mention the stability and ease of driver installation. But don't listen to me, I'm just a schmoe that plays a lot of games. That's all I know. Never really explored these with regard to workstation functionality.
 
DMC:
I think you are mistaken.

The Tesla does not get attached to a regular GPU at all, with or without a physical bridge.

It handles/helps with regular processor tasks, not just graphics. I think you need to read up on what you are talking about.

Titan, the world's fastest supercomputer, uses this exact processor to whoop ass. NOT on graphics, but on computational brute power.

It has very little to do with boosting an existing graphics card. You are very much mistaken. But don't listen to me....

Read this.....

What is GPU Computing?

GPU computing is the use of a GPU (graphics processing unit) together with a CPU to accelerate general-purpose scientific and engineering applications. Pioneered five years ago by NVIDIA, GPU computing has quickly become an industry standard, enjoyed by millions of users worldwide and adopted by virtually all computing vendors.

GPU computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user's perspective, applications simply run significantly faster.

CPU + GPU is a powerful combination because CPUs consist of a few cores optimized for serial processing, while GPUs consist of thousands of smaller, more efficient cores designed for parallel performance. Serial portions of the code run on the CPU while parallel portions run on the GPU.

Most customers can immediately enjoy the power of GPU computing by using any of the GPU-accelerated applications listed in our catalog, which highlights over one hundred industry-leading applications. For developers, GPU computing offers a vast ecosystem of tools and libraries from major software vendors.


History of GPU Computing

Graphics chips started as fixed-function graphics processors but became increasingly programmable and computationally powerful, which led NVIDIA to introduce the first GPU. In the 1999-2000 timeframe, computer scientists and domain scientists from various fields started using GPUs to accelerate a range of scientific applications. This was the advent of the movement called GPGPU, or General-Purpose computation on GPU.

While users achieved unprecedented performance (over 100x compared to CPUs in some cases), the challenge was that GPGPU required the use of graphics programming APIs like OpenGL and Cg to program the GPU. This limited accessibility to the tremendous capability of GPUs for science.

NVIDIA recognized the potential of bringing this performance for the larger scientific community, invested in making the GPU fully programmable, and offered seamless experience for developers with familiar languages like C, C++, and Fortran.

GPU computing momentum is growing faster than ever before. Today, some of the fastest supercomputers in the world rely on GPUs to advance scientific discoveries; 600 universities around the world teach parallel computing with NVIDIA GPUs; and hundreds of thousands of developers are actively using GPUs.

All NVIDIA GPUs—GeForce®, Quadro®, and Tesla®— support GPU computing and the CUDA® parallel programming model. Developers have access to NVIDIA GPUs in virtually any platform of their choice, including the latest Apple MacBook Pro. However, we recommend Tesla GPUs for workloads where data reliability and overall performance are critical. For more details, please see “Why Choose Tesla.”

Tesla GPUs are designed from the ground-up to accelerate scientific and technical computing workloads. Based on innovative features in the “Kepler architecture,” the latest Tesla GPUs offer 3x more performance compared to the previous architecture, more than one teraflops of double-precision floating point while dramatically advancing programmability and efficiency. Kepler is the world’s fastest and most efficient high performance computing (HPC) architecture.

"GPUs have evolved to the point where many real-world applications are easily implemented on them and run significantly faster than on multi-core systems. Future computing architectures will be hybrid systems with parallel-core GPUs working in tandem with multi-core CPUs.'
Professor Jack Dongarra
Director of the Innovative Computing Laboratory
The University of Tennessee
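
To make the "offload the parallel portion" idea concrete, here is a minimal CUDA sketch (my own toy example, not from NVIDIA's page): the serial setup runs on the CPU, and only the per-element loop is shipped to the card.

[CODE]
// Toy CUDA example: y = a*x + y over a million floats.
// The per-element loop runs on the GPU; everything else stays on the CPU.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Serial setup on the CPU (host).
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Offload: copy inputs to the card, run the parallel part there...
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // ...then copy the result back and carry on with serial code.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
[/CODE]

This is also why the speedup depends entirely on the application: only code written to run its parallel portions on the GPU like this sees the benefit.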




Another......

"
NVIDIA Unveils World's Fastest, Most Efficient Accelerators, Powers World's No. 1 Supercomputer

Monday, November 12, 2012
SC12 -- NVIDIA today unveiled the NVIDIA® Tesla® K20 family of GPU accelerators, the highest performance, most efficient accelerators ever built, and the technology powering Titan, the world's fastest supercomputer according to the TOP500 list released this morning at the SC12 supercomputing conference.
Armed with 18,688 NVIDIA Tesla K20X GPU accelerators, the Titan supercomputer at Oak Ridge National Laboratory in Oak Ridge, Tenn., seized the No. 1 supercomputer ranking in the world from Lawrence Livermore National Laboratory's Sequoia system with a performance record of 17.59 petaflops as measured by the LINPACK benchmark.

Tesla K20 - Performance, Energy-Efficiency Leadership
Based on the revolutionary NVIDIA Kepler™ compute architecture, the new Tesla K20 family features the Tesla K20X accelerator, the flagship of NVIDIA's Tesla accelerated computing product line.
Providing the highest computing performance ever available in a single processor, the K20X provides tenfold application acceleration when paired with leading CPUs. It surpasses all other processors on two common measures of computational performance -- 3.95 teraflops single-precision and 1.31 teraflops double-precision peak floating point performance.
The new family also includes the Tesla K20 accelerator, which provides 3.52 teraflops of single-precision and 1.17 teraflops of double-precision peak performance. Tesla K20X and K20 GPU accelerators representing more than 30 petaflops of performance have already been delivered in the last 30 days. This is equivalent to the computational performance of last year's 10 fastest supercomputers combined."

more...... NVIDIA Newsroom - Releases - NVIDIA Unveils World's Fastest, Most Efficient Accelerators, Powers World's No. 1 Supercomputer - NVIDIA Newsroom
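
For anyone checking the math, those peak numbers fall straight out of the published specs (assuming I have the clocks right):

K20X: 2688 CUDA cores x 2 FLOPs per clock (multiply-add) x ~0.73 GHz ≈ 3.95 teraflops single precision; GK110 runs double precision at one third the single-precision rate, so ≈ 1.31 teraflops.
K20: 2496 CUDA cores x 2 x ~0.71 GHz ≈ 3.52 teraflops single precision, ≈ 1.17 double.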
 
CoolHandLuke:
I still don't think it's going to help you much if the software isn't optimized to utilize these hardware cores. Don't forget, mathematicians use supercomputers to do math with software written to exploit every part of the system. Few other applications do this.

Unless you were planning to make the servos of your Haas run code straight from the core computer and not the onboard PLC, I see no advantage for you in having this run your computer.

And you were right, this Tesla device doesn't bridge physically. I will have to go back and reread all those posts.
 
DMC:
I know what I am buying. It is supposed to do exactly what I want it to do:

speed up my processing times in all applications in a tremendous way. Not related to graphics at all.

Everything is just math in a PC. I dunno what you are talking about. ???

Anyway, I will post before-and-after results of calculation times for 5-axis toolpaths, and a few other benchmarks we can think of to test.

I expect glorious results. LOL
 
CoolHandLuke:
No, I was right: the Tesla does bridge and is capable of SLI, though it is not required. Bridging with SLI would provide you with, as I said, bloated GPC performance, bottlenecked by the usual suspects of a computer: RAM and hard drive.

Instead, these Tesla cards have mini P3 computer processors integrated into them, freeing CPU bandwidth to devote to running your kernel functions.

So unless you are running Linux (for which NVIDIA has no public driver support, though it does for special systems like defense departments and research projects), you basically just have a very good video card, and not much else.

Remember: supercomputers like Titan do not use Windows. If they did, they wouldn't be supercomputers. Cray as a system isn't easily optimized for traditional hardware or suited to tasks a normal Linux enthusiast would undertake. It was designed so that the core hardware ran base kernel functions and additional hardware ran the computational array. It was essentially designed to be a supercomputer, and these aren't systems you cook up and boot Windows on.

Bottom line: you really are just spending a ton of money on a faster video card.
 
CoolHandLuke:
Titan has 18,688 nodes (4 nodes per blade, 24 blades per cabinet), each containing a 16-core AMD Opteron 6274 CPU with 32 GB of DDR3 ECC memory and an NVIDIA Tesla K20X GPU with 6 GB of GDDR5 ECC memory.

Essentially, Titan is using 18,688 Tesla cards.

No way are you benchmarking even close to "supercomputer" performance from the purchase of a single card.
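
Scale check, for anyone following the numbers: 18,688 cards x ~1.31 double-precision teraflops per K20X ≈ 24.5 petaflops of raw GPU peak, which is where Titan's 17.59-petaflop LINPACK score largely comes from. A single card is one 18,688th of that machine.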
 
DMC:
Windows applications run way faster. Faster than any Intel-only-powered PC. Much faster than Sandy Bridge performance. Many intelligent people call this a step into the world of supercomputing (articles, magazines, etc.).

Up to 10x or more (e.g. MATLAB, which all 3M scanner PCs have).

The card I bought is for Windows, man!

Why are you so hard-headed? I don't care about Linux or video games.

It is not intended to be connected to a graphics card.

Where did I say I am building a supercomputer? (But I am, actually.)

I am going to test before and after this change.

SLI is for graphics cards. This is NOT a graphics card. You do not link it to a graphics card at all. You can link it to another Tesla card, but that is nothing like using SLI with graphics cards. There is not even a place on the card to physically hook up a bridge! Nowhere does SLI get mentioned in any documents that I have seen. http://www.nvidia.com/object/personal-supercomputing.html
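
For what it's worth, this is roughly how a CUDA program picks the Tesla as its compute device while the GeForce keeps driving the monitors. A minimal sketch using the standard CUDA runtime device-query calls; matching on the device name is just my shortcut for illustration.

[CODE]
// Enumerate CUDA devices and route compute work to the Tesla.
// The display card is never touched; no SLI or bridge involved.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("device %d: %s (compute capability %d.%d)\n",
               d, prop.name, prop.major, prop.minor);

        // Crude name match, just for illustration.
        if (strstr(prop.name, "Tesla") != NULL) {
            cudaSetDevice(d);  // later kernels and allocations go to this card
            printf("-> using device %d for compute\n", d);
        }
    }
    return 0;
}
[/CODE]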

Thanks for your detailed info. LOL

Don't buy one if you don't feel it is for you.

Most Tesla cards are made for servers, but the K20 is specifically made for workstation PCs running Windows! Applications are much, much faster. Very simple.

Cheers!
 
CoolHandLuke:
There is a Linux 64-bit generic display driver. It is not a specific driver built for the Tesla, just for NVIDIA architecture; that driver is generic to all the products NVIDIA makes, and it is advertised as possibly making your older-model card unstable.

Far and away not the same optimization as Titan, seeing as most versions of Linux come pre-packaged with drivers or are homebrewed (as in Titan's case).

noob
 
DMC:
CoolHandLuke said:
"No, I was right: the Tesla does bridge and is capable of SLI, though it is not required.... Bottom line: you really are just spending a ton of money on a faster video card."

No, dude. There is no SLI involved in running a single Tesla card, or multiple ones, and no option for it either. I have no idea where you think you found that info. They are for calculations normally run on your CPU. It works just fine with Windows, and with most applications. The newer K20 has many changes to make this happen. Unless you can link to your source of info, I must say that you are full of it and do not understand what you are talking about at all.

If this thread makes you too upset for whatever reason, then I suggest taking a break and going to play some more video games.

The Tesla card is NOT for playing video games. I never said it was, and I am not buying it for that reason. I also don't give a rat about Linux.
 
CoolHandLuke:
Do you know how ridiculous you sound?

You can't build a supercomputer by simply adding a single piece of hardware that promised to give it to you.

Do you know how it delivers on its promise of "cluster-level performance"? By throwing more cores into the machine to negotiate and streamline tasks, not by increasing processing power.

At most, your windoze confuser will run all 64-bit streams faster (in the sense that the threads are streamlined), and all 32-bit streams with no noticeable improvement.
 
CoolHandLuke:
You are buying the sales pitch, man. Falling for the graphs. Drinking the Kool-Aid.
 
DMC:
Until you say something that is factual, you are just wasting your time trying to have a conversation with me.

It appears to me you are very cornfused about this technology. I don't even know what your point is, other than telling me it doesn't work.

Do you have some source that tells you this? Or is this just your guess? Show me where you get your info from.

Yeah, NVIDIA is a worthless company that is full of lies. LOL

They have nothing to brag about, I guess. Go back to work and ignore this thread. It's all just smoke and mirrors.
 
CoolHandLuke:
Oh, it works, don't get me wrong.

But NVIDIA is, and has always been, full of lies.

I look forward to reading how fast your CAM calculations are.
 
