ZW3D+CUDA

Rank: 1

rus

Newbie

posts: 11

Registered: 2011-7-14

Message 1 of 5

 ZW3D+CUDA
06-08-2012 05:14 PM
Will NVIDIA's CUDA technology be supported in ZW3D?

Rank: 1

Factorytuned

Newbie

posts: 44

Registered: 2011-12-29

Message 2 of 5

06-08-2012 10:21 PM
I don't see the likelihood of that personally. It would require a special set of graphics libraries, meaning that separate sets of graphics libs would be needed for non-CUDA video boards. BUT it would be a neat trick, and it would decimate everything else in rendering...
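Just to picture what that dual code path would mean, here is a minimal, hypothetical sketch (not anything from ZW3D) of an application probing for a CUDA device at runtime and falling back to its normal OpenGL/CPU path when none is found; the render functions in the comments are placeholders:

```cuda
// Hedged sketch: one way an application might branch between a CUDA path
// and a plain CPU/OpenGL path at runtime, so non-CUDA boards keep working.
// The render functions named in the comments are hypothetical placeholders.
#include <cuda_runtime.h>
#include <cstdio>

bool cudaPathAvailable() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    // cudaGetDeviceCount fails (or reports 0) on machines without a CUDA GPU.
    return (err == cudaSuccess) && (deviceCount > 0);
}

int main() {
    if (cudaPathAvailable()) {
        printf("CUDA device found: use the GPU-accelerated library.\n");
        // renderWithCuda();   // hypothetical CUDA-enabled code path
    } else {
        printf("No CUDA device: fall back to the standard OpenGL/CPU path.\n");
        // renderWithOpenGL(); // hypothetical fallback code path
    }
    return 0;
}
```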

I think what would be good for the upcoming 64-bit 2013 release would be to start building the math libs to take advantage of the new vector math cores integrated into Intel's newest line of CPUs, since that can become an industry-wide standard that everyone running Intel CPUs can take advantage of. Best of all would be to support every feature: CUDA for video and the vector math cores.

Right now we don't even have multi-core support in this application, and we are still struggling with memory issues, although it is getting better!

Jeff
AKA - FT/Factorytuned

Rank: 9

Colin

Manager

posts: 138

Registered: 2012-1-19

Message 3 of 5

10-08-2012 09:35 AM
Hi, All

NVIDIA CUDA is good technology for graphics enhancement, and it is one of the options. ZW3D currently uses OpenGL, and more advanced APIs are also available to boost performance. ZW3D has great potential to make more use of OpenGL in V17.

CUDA could be a choice for the future. We will choose the best technology for graphics performance and make the product better able to handle huge graphics calculations.

64-bit will come first in V17, which will be able to handle huge assemblies by using more memory.

Best regards
Colin

Rank: 7

Paul

Moderator

posts: 314

Registered: 2011-9-17

Message 4 of 5

20-09-2012 05:02 AM
I was privileged to see a presentation by a computer assembler summarising their research over the last two years into CUDA and other heterogeneous computing technologies.
Their opinion is that CUDA is the technology that will prevail, due to market forces and NVIDIA having been in the game the longest. Much software is already CUDA-enabled and benefiting.
With a Tesla matched to a suitable graphics card (a GeForce is fine), incredible performance is possible.
I think CAM is where the potential makes the effort worth pursuing.
I suspect 32-bit with CUDA would outstrip 64-bit without CUDA by a long way.
Waiting to see which technology becomes dominant will only leave ZW in the background.
These guys have a generic-component PC (i7) with a Tesla and a GeForce card achieving 1 teraFLOPS single precision. It took them a while to get it sorted, but the results are impressive and the power consumption is at acceptable desktop levels.
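For anyone curious how such figures are arrived at, here is a rough, hypothetical sketch of my own (not from the presentation) that queries each CUDA device and makes a back-of-envelope peak single-precision estimate; the cores-per-SM figure is an assumption that varies by GPU generation:

```cuda
// Hedged sketch: enumerate CUDA devices and estimate peak single-precision
// throughput. The cores-per-SM value is an assumption (Kepler-era parts);
// older or newer generations differ, so treat the number as a rough guide.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices visible.\n");
        return 0;
    }
    const int assumedCoresPerSM = 192; // assumption; depends on architecture
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // clockRate is reported in kHz; 2 FLOPs per core per clock assumes FMA.
        double peakGflops = 2.0 * assumedCoresPerSM * prop.multiProcessorCount *
                            (prop.clockRate * 1e3) / 1e9;
        printf("%s: %d SMs, ~%.0f GFLOPS single precision (rough estimate)\n",
               prop.name, prop.multiProcessorCount, peakGflops);
    }
    return 0;
}
```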
IMO CUDA is something ZW should be actively pursuing now.

Rank: 1

Factorytuned

Newbie

posts: 44

Registered: 2011-12-29

Message 5 of 5

23-09-2012 12:34 AM
Last edited by Factorytuned at 2012-9-30 02:22

I have to stick to my guns on this one. I do agree that CUDA parallel computing is incredible if you have an application that can take advantage of it. However, I do not necessarily agree that mechanical CAD/CAM is in such need. BOO HISS BOO... CUDA is positioned for scientific analysis, computation, post-process video and animation kinematics (i.e., the PhysX interface), offloading floating-point math to a greater degree onto the GPUs. TeraFLOPS of precision have little relevance to mechanical CAD/CAM, UNLESS you are working with CFD or stress analysis or, somewhat applicable here, photo-realistic rendering. Also, the Tesla processors are going away. The Kepler processors are much more advanced: a much smaller die, two levels of cache and, in the case of the K5000, 1500 CUDA cores and single-precision throughput of 2.1 TFLOPS on a single x16 bus slot, two boards wide.
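To give a concrete picture of the floating-point offload I'm describing, here is a generic, minimal SAXPY sketch (y = a*x + y) run on the GPU; it is purely illustrative and has nothing to do with ZW3D's internals:

```cuda
// Hedged sketch of offloading floating-point math to the GPU: a minimal
// SAXPY kernel. Generic illustration only, not anything a CAD/CAM package does.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Managed memory keeps the example short; a real application would
    // usually manage host/device copies explicitly.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```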

What we seek as users is fast, solid graphics libraries with stable and uniform operation. This would all be semantics if the graphics were not ALWAYS an issue.

Intel's current E5 (2600 series) class Xeons offer vector cores. This could be one reason for the controversy over GPU vs. CPU. In my opinion, the CPU should use the video processor as a co-processor, tightly coupled at layer-one code or firmware (BIOS). Vector math is much more capable, with lower overhead, at generating giga- and tera-FLOP performance.

Case in point: the first supercomputers, the Crays 1 through 4, the units S. R. Cray developed, were simple (huh!) 3D vector, crossbar machines.

All we are really concerned about is fill rate; that is what we (humans) perceive as fast video. CUDA seeks to provide computation and video in a single platform. I've tested several of NVIDIA's highest-end video boards on several different mechanical CAD/CAM systems, and past a certain point the performance simply does not get any better, no matter how complex or costly the board or how much RAM or how many CUDA cores it has, because the output to the screen does not change. What does change is how the image is rendered, looks, or is perceived, through complex shader algorithms and 64-bit floating-point precision math.

This is the reason I started the video thread: to find out what others are using and the different performance levels and errata. I only support MS, Xeons, Intel-certified platforms, and NVIDIA GPUs.

But no one has presented any information, which is curious. I will, however, be testing an IBM server platform, the x3500 M4 with E5-2600 CPUs. This server is tested to directly support two K5000 or 6000 GPUs on a pair of E5-2680 CPUs, with three PCI Express x16 slots across the two CPUs, each CPU handling a single GPU on its dedicated bus.

So much power! But I'll bet ZW3D and the other mechanical CAD/CAM software systems I test will only perform marginally better than on my current platform: an Intel 5600/5520-series X5677 with a Q4000.

Jeff
AKA-FT/Factorytuned