Active questions tagged gpgpu - Super User most recent 30 from superuser.com 2019-04-22T08:29:10Z http://asianhospital.com/?id=feeds/tag?tagnames=gpgpu http://www.creativecommons.org/licenses/by-sa/3.0/rdf

http://asianhospital.com/?id=q/1398591 1 Set Fan Speed of NVIDIA GPU on headless Ubuntu server at boot? memorableUserNameHere http://asianhospital.com/?id=users/412018 2019-01-25T22:16:42Z 2019-03-06T09:11:09Z
<p>We use a headless system (a monitor-less CyberPC gaming machine) in my high-performance-computing research lab running Ubuntu 16.04 (kernel 4.4.0-141-generic) containing an NVIDIA GTX 1080 (as a GPGPU) on v384.130 drivers. I often notice that the fans are either not running, or only run intermittently.</p>
<p>I don't like that, for a number of reasons.</p>
<p>I'm aware of how to set the fan speed using the NVIDIA X Server settings with coolbits=4 (see <a href="https://askubuntu.com/a/299648/518756">"How can I change the nvidia GPU fan speed?" on AskUbuntu</a>). However, this only seems to work when a user is logged into an X server instance on the machine.</p>
<p>How can I set the fan speed (or at least a minimum speed) at boot?</p>

http://asianhospital.com/?id=q/1405922 0 How to install CUDA? beginner http://asianhospital.com/?id=users/998547 2019-02-15T02:11:50Z 2019-02-15T02:11:50Z
<p>I want to use the GPU for deep learning (Theano, Keras). My GPUs are listed below and the OS is CentOS Linux release 7.6.1810. I want to install CUDA (≥ 7.0), but CUDA needs NVIDIA driver version 346.46 or later (<a href="https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#major-components" rel="nofollow noreferrer">https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#major-components</a>).</p>
<p>So I tried to install NVIDIA driver version 346.96, which supports the Tesla C2070/C2050 (my GPUs), but I cannot: I get a long error log that ends with 'ERROR: Unable to build the NVIDIA kernel module.'</p>
<p>How can I install CUDA (≥ 7.0)? If you need more information, please let me know.</p>
<pre><code>nvidia-smi Fri Feb 15 10:45:02 2019
+------------------------------------------------------+
| NVIDIA-SMI 340.107 Driver Version: 340.107 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla C2050 / C... Off | 0000:0F:00.0 Off | 0 |
| 30% 57C P0 N/A / N/A | 6MiB / 2687MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Quadro FX 380 Off | 0000:28:00.0 N/A | N/A |
| 40% 75C P0 N/A / N/A | 3MiB / 255MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla C2050 / C... Off | 0000:42:00.0 Off | 0 |
| 30% 52C P0 N/A / N/A | 6MiB / 2687MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Compute processes: GPU Memory |
| GPU PID Process name Usage |
|=============================================================================|
| 1 Not Supported |
</code></pre>
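<p>The "Unable to build the NVIDIA kernel module" error usually means the installer could not compile against the running kernel, most often because the compiler or the matching kernel headers are missing. A minimal sketch of one way to retry on CentOS 7, assuming the .run installer already downloaded above; exact package availability depends on your repositories:</p>
<pre><code># install a compiler and the headers/sources matching the running kernel
sudo yum install -y gcc kernel-devel-$(uname -r) kernel-headers-$(uname -r)

# drop to a text console so no X server holds the GPU (skip if already headless)
sudo systemctl isolate multi-user.target

# re-run the driver installer mentioned in the question
sudo sh ./NVIDIA-Linux-x86_64-346.96.run

# if it still fails, the concrete compile error is recorded here
less /var/log/nvidia-installer.log
</code></pre>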
http://asianhospital.com/?id=q/962565 1 Are SLI bridges necessary for general purpose GPU computing? tinlyx http://asianhospital.com/?id=users/379553 2015-08-25T15:44:23Z 2018-12-28T12:04:26Z
<p>I am looking at a computer configuration with an option to add an SLI bridge. The intended use of the GPUs is general purpose/scientific computing, not games.</p>
<p>My question is: will having an SLI bridge help improve performance?</p>
<p>According to <a href="https://en.wikipedia.org/wiki/Scalable_Link_Interface" rel="nofollow noreferrer">wikipedia</a>, an SLI bridge seems to help solve bandwidth issues associated with rendering frames. I am just wondering whether that also applies to general purpose computing.</p>
<p>-- EDIT --</p>
<p>I haven't used SLI before. So one additional question is: is a separate SLI bridge necessary at all to use SLI, or does it come with motherboards that support SLI?</p>
<p>I have this question because when configuring computers online, I saw options for SLI bridges, e.g. <a href="http://www.cyberpowerpc.com/system/Fang_III_-_Viper" rel="nofollow noreferrer">here</a>, where you can choose</p>
<pre><code> "Dual Card (SLI)" (GTX 970) in the "Video Card" category, and "none" in the "SLI bridge" category.
</code></pre>
<p>I think this is the default I saw, which makes me wonder whether choosing "none" vs. the other options, such as a 2-way or 3-way "EVGA Pro SLI Bridge V2", matters at all for general purpose computing.</p>

http://asianhospital.com/?id=q/1381067 -1 Do GPUs have caching systems or an operating system like a CPU does? [closed] Mauricio Martinez http://asianhospital.com/?id=users/970247 2018-12-05T16:50:40Z 2018-12-05T18:55:18Z
<p>In terms of an operating system, does a GPU have some type of software on it to manage virtual memory, or context switching?</p>
<p>And in terms of hardware, does it have some type of caching system, or a TLB?</p>

http://asianhospital.com/?id=q/1373979 0 How is one supposed to control GPU clocks and voltages on Linux? xakepp35 http://asianhospital.com/?id=users/539399 2018-11-09T04:41:25Z 2018-11-09T21:52:22Z
<p>I have several modern GPUs, both AMD (let's take a Polaris RX 480) and NVIDIA (let's take a GTX 1080) models. I use them for various purposes, such as mining, video rendering and physics processing. Each task is stable and fast at specific values of four basic parameters: core and memory voltages and clocks.</p>
<p>I used to use Windows and MSI Afterburner to set those parameters. I also know of utilities like OverdriveNTool.exe, which has a command-line interface. So such control is clearly possible on Windows.</p>
<p>I have also noticed that a GPU is almost a separate computer in its own right, largely capable of operating independently of the PC: it has its own BIOS, RAM and core chip. The OS is not directly involved in GPU operation; it is just "firmware" running on the CPU that asks the GPU to do things via OS-unrelated protocols. So the GPU control protocol should be OS-independent.</p>
<p>Putting those two observations together, I conclude that it should definitely be possible to modify GPU clocks and voltages programmatically at run time on any OS.</p>
<p>I want to do this on Linux (not a specific distro like Ubuntu, but Linux itself - a minimalistic environment: <code>kernel+glibc+busybox</code>). What I have found so far:</p>
<hr>
<p>With <code>amdgpu</code> I found two things:</p>
<ul>
<li><p>The first is an odd little interface (more like a kid's toy - who put it there, and why? :), the PowerPlay file that lives in <code>/sys/class/drm/card?/mclk_od</code> and only allows a 0-20% boost (it is even typed as an <code>int</code> in the amdgpu kernel sources!). It has never worked on my busybox build and does not allow setting exact parameters.</p></li>
<li><p>Tools like <a href="https://github.com/matszpk/" rel="nofollow noreferrer">this</a> that use AMD's ADL, but I could not find where to download or build that ADL library, and I have heard it is outdated and no longer works. The official amdgpu driver download for Linux does not include such a library, and I don't know where to get it.</p></li>
</ul>
<hr>
<p>With <code>nvidia</code> I failed to find any way to control GPU parameters in a minimal environment like kernel+glibc+busybox.</p>
<hr>
<p>For now I am hard-flashing the desired frequencies and voltages directly into the BIOS. But that wastes time, requires a reboot and quickly wears out the BIOS flash write cycles. So I am asking for a simpler way to set clocks and voltages.</p>
<p>How can I set the core clock and voltage in raw Linux (no Xorg, very few third-party libraries) for AMD RX 480 and NVIDIA GTX 1080 video cards?</p>
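<p>For reference, both drivers do expose some of this without Xorg, although full voltage control is still limited. A rough sketch, assuming a reasonably recent <code>amdgpu</code> kernel driver and the NVIDIA proprietary driver with <code>nvidia-smi</code> available; the sysfs paths, state indices and clock values below are illustrative and vary by kernel version and board:</p>
<pre><code># amdgpu: switch the card to manual power management
echo manual | sudo tee /sys/class/drm/card0/device/power_dpm_force_performance_level

# list the available core-clock DPM states, then force one by index
cat /sys/class/drm/card0/device/pp_dpm_sclk
echo 5 | sudo tee /sys/class/drm/card0/device/pp_dpm_sclk

# newer kernels also expose an overdrive clock/voltage table here
cat /sys/class/drm/card0/device/pp_od_clk_voltage

# nvidia: persistence mode, power limit, and (where the board supports it) application clocks
sudo nvidia-smi -pm 1
sudo nvidia-smi -pl 150
sudo nvidia-smi -q -d SUPPORTED_CLOCKS
sudo nvidia-smi -ac 5005,1607   # memory,graphics MHz; many GeForce boards reject this
</code></pre>
<p>Direct voltage control on GeForce boards is, as far as the stock NVIDIA tools go, essentially not exposed at all, which is part of why BIOS flashing remains common for setups like the one described above.</p>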
http://asianhospital.com/?id=q/1352525 -2 How do the "games-oriented" NVIDIA-GPU-based cards behave w.r.t. CUDA computation? einpoklum http://asianhospital.com/?id=users/122798 2018-08-26T21:28:12Z 2018-09-05T00:36:43Z
<p>NVIDIA makes GPUs, which are used both for graphics and for graphics-unrelated computing. It also publishes compute-related specs of its GPUs.</p>
<p>Now, when it comes to <em>cards</em>:</p>
<ul>
<li>There are some cards that NVIDIA sells itself as the sole vendor, such as the Quadro and the Tesla series (with the latter being solely compute-oriented, lacking any display outputs); these have their own GPUs which you don't see on consumer-oriented cards.</li>
<li>There are the consumer-oriented cards which vendors such as MSI, GigaByte and EVGA sell, which are graphics-oriented and use NVIDIA GPUs. These come in all sorts of varieties, such as "SuperClocked", "FTW", "D5", "OC", "Extreme Edition", "Green/Red/Blue/Black" etc.</li>
<li>Finally, NVIDIA has started over the past couple of years to sell "Founders' Edition" cards with the same GPUs (chip-wise) used by the other consumer-oriented card vendors.</li>
</ul>
<p>My question is: Regarding compute work, can I only trust the "Founders' Edition" to deliver consistent performance in exact correspondence to the specifications? Or can I also trust the other vendors' cards to not "mess things up", and conform to the specs (modulo stated changes in core and memory clock speeds)? And if I can't fully trust the non-NVIDIA vendor cards, how would they "misbehave"?</p>
<p>If the scope is too general, I'm specifically interested in GTX 1080 Ti cards (GP102-350-K1-A1 chips).</p>
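<p>One practical way to see what a particular vendor's board actually reports, rather than relying on the box, is to ask the driver. A small sketch using standard nvidia-smi query fields; the field list is abbreviated:</p>
<pre><code># report the board name, maximum clocks and default power limit as the driver sees them
nvidia-smi --query-gpu=name,clocks.max.graphics,clocks.max.memory,power.default_limit --format=csv

# or dump the full clock and power sections, which show any factory overclock
nvidia-smi -q -d CLOCK,POWER
</code></pre>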
http://asianhospital.com/?id=q/1328661 104 Why do people use GPUs for high-performance computation instead of a more specialized chip? Alex S http://asianhospital.com/?id=users/708322 2018-06-05T03:06:03Z 2018-06-09T01:30:24Z
<p>From my understanding, people began using GPUs for general computing because they are an extra source of computing power. And though they are not as fast as a CPU for each operation, they have many cores, so they can be better adapted for parallel processing than a CPU. This makes sense if you already own a computer that happens to have a GPU for graphics processing, but you don't need the graphics, and would like some more computational power. But I also understand that people buy GPUs <em>specifically</em> to add computing power, with no intention of using them to process graphics. To me, this seems similar to the following analogy:</p>
<p>I need to cut my grass, but my lawn mower is wimpy. So I remove the cage from the box fan I keep in my bedroom and sharpen the blades. I duct tape it to my mower, and I find that it works reasonably well. Years later, I am the purchasing officer for a large lawn-care business. I have a sizable budget to spend on grass-cutting implements. Instead of buying lawn mowers, I buy a bunch of box fans. Again, they work fine, but I have to pay for extra parts (like the cage) that I won't end up using. (For the purposes of this analogy, we must assume that lawn mowers and box fans cost about the same.)</p>
<p>So why is there not a market for a chip or a device that has the processing power of a GPU, but not the graphics overhead? I can think of a few possible explanations. Which of them, if any, is correct?</p>
<ul>
<li>Such an alternative would be too expensive to develop when the GPU is already a fine option (lawn mowers don't exist, so why not use this perfectly good box fan?).</li>
<li>The fact that 'G' stands for graphics denotes only an intended use, and does not really mean that any effort goes into making the chip better adapted to graphics processing than any other sort of work (lawn mowers and box fans are the same thing when you get right down to it; no modifications are necessary to get one to function like the other).</li>
<li>Modern GPUs carry the same name as their ancient predecessors, but these days the high-end ones are not designed to specifically process graphics (modern box fans are designed to function mostly as lawn mowers, even if older ones weren't).</li>
<li>It is easy to translate pretty much any problem into the language of graphics processing (grass can be cut by blowing air over it really fast).</li>
</ul>
<p>EDIT:</p>
<p>My question has been answered, but based on some of the comments and answers, I feel that I should clarify my question. I'm not asking why everyone doesn't buy their own computations. Clearly that would be too expensive most of the time.</p>
<p>I simply observed that there seems to be a demand for devices that can quickly perform parallel computations. I was wondering why it seems that the optimal such device is the Graphics Processing Unit, as opposed to a device designed for this purpose.</p>

http://asianhospital.com/?id=q/1170244 2 Why does Nvidia Pascal have both FP32 and FP64 cores? Why can I not use them simultaneously? AstrOne http://asianhospital.com/?id=users/688658 2017-01-22T10:09:00Z 2018-03-24T14:42:17Z
<p>I am trying to understand Nvidia's GPU architecture, but I am a bit stuck on something that appears to be quite simple. Each Streaming Multiprocessor in Pascal consists of 64 FP32 and 32 FP64 cores. And here are my two questions:</p>
<ul>
<li>Why did Nvidia put both FP32 and FP64 units in the chip? Why not just put FP64 units that are capable of performing 2x FP32 operations per instruction (like the SIMD instruction sets in CPUs)?</li>
<li>Why can't I use all FP32 and FP64 units at the same time?</li>
</ul>
<p>I guess both are hardware design decisions, but I would like to know more details about this topic. Any information on this is more than welcome!</p>
<p>EDIT1:</p>
<ul>
<li>If it is possible to do FP32 and FP64 at the same time, does this mean that a GPU which has 8 TFLOPS SP and 4 TFLOPS DP can give you (theoretically) 12 TFLOPS of mixed throughput?
<ul>
<li>In the case of CUDA, how is this achieved? Do I just use doubles and floats at the same time in my kernel? Or do I need to pass some kind of flag to NVCC?</li>
</ul></li>
</ul>
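<p>On the last sub-question: no dedicated compiler flag is needed to mix the two precisions; float and double operations can coexist in the same kernel source, and you only have to target an architecture whose FP64 units you intend to use. A minimal sketch (the file name is hypothetical):</p>
<pre><code># sm_60 targets Pascal, the architecture discussed above; the same kernel may freely
# contain both float and double arithmetic without any extra nvcc flags
nvcc -arch=sm_60 -o mixed mixed.cu
</code></pre>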
http://asianhospital.com/?id=q/1298494 3 Debian 9: NVIDIA Card + intel graphics Mauricio http://asianhospital.com/?id=users/876639 2018-02-26T13:30:39Z 2018-02-26T13:30:39Z
<p>I have a little bit of a dilemma right now:</p>
<p>I have an NVIDIA graphics card and I need to use it for scientific calculations only (GPU). I have Debian 9 installed on my system and I also have the fvwm window manager set up. The thing is, I would like to install the Cinnamon desktop but, for some reason, after running:</p>
<pre><code>apt-get install cinnamon
</code></pre>
<p>and rebooting, my computer gets stuck in fallback mode. I think the problem is that the system doesn't know which card to choose (NVIDIA or Intel). My goal is to have Cinnamon running on the Intel GPU and leave the NVIDIA card for calculations.</p>
<p>Any idea how I can set this up correctly?</p>
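<p>A common way to get this split is to pin the X server to the integrated Intel GPU explicitly, so the desktop never initializes the NVIDIA card. A minimal sketch; the BusID is an example and must be replaced with whatever lspci reports for the Intel device, and the file name under xorg.conf.d is arbitrary:</p>
<pre><code># find the PCI address of the integrated GPU (typically 00:02.0 for Intel)
lspci | grep -i vga

# /etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
    Identifier "IntegratedGraphics"
    Driver     "modesetting"
    BusID      "PCI:0:2:0"
EndSection
</code></pre>
<p>After a reboot, <code>nvidia-smi</code> should list no Xorg process on the NVIDIA card, leaving it free for the calculations.</p>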
http://asianhospital.com/?id=q/1194463 6 Is it possible to mount video cards via USB or Thunderbolt or...? Sy Moen http://asianhospital.com/?id=users/147851 2017-04-01T04:26:20Z 2017-04-04T18:17:07Z
<p>I need to build a (cheap) computer that might serve to:</p>
<ol>
<li>mine digital currency</li>
<li>render 3D animations</li>
<li>solve SETI problems</li>
<li>...etc</li>
</ol>
<p>Basically I am just using the GPUs to solve math problems. I need very little live throughput to / from the cards.</p>
<hr>
<p><strong>My Question</strong></p>
<hr>
<p>Is there a way to:</p>
<ol>
<li>mount video cards through USB or Thunderbolt or some other chain-able protocol</li>
<li>without writing custom drivers</li>
<li>on a linux variant</li>
</ol>
<p>There are some motherboards that support up to 6 PCIe connections, but it would be so much nicer if I could mount as many as the system resources could handle.</p>
<hr>
<p><strong>Not my question</strong></p>
<hr>
<ol>
<li>You would need to power them some other way. Got it. They all need external power.</li>
<li>USB (and maybe even Thunderbolt) doesn't have the throughput for full video output. Got it. I am not using these as video cards per se. <a href="http://asianhospital.com/?id=questions/1116149/pcie-to-usb-thunderbolt-for-graphics-card">PCIE to USB/Thunderbolt for Graphics Card</a></li>
</ol>
<hr>
<p><strong>Other, possibly interesting answers</strong></p>
<hr>
<ol>
<li>There is this clustering solution that... (likes gpu's?)</li>
<li>There are these other processors that might be better suited... (asic?)</li>
</ol>
<hr>
<p><strong>Discoveries made since asking the q</strong></p>
<hr>
<ol>
<li><p>A cluster of motherboards so cheap as to be irrelevant compared to the price of the GPUs - see this very <a href="https://www.youtube.com/watch?v=i_r3z1jYHAc" rel="nofollow noreferrer">interesting dissertation project video</a>. Alas, Raspberry Pis and Arduinos don't seem to have PCIe slots. <a href="https://www.solid-run.com/product/hummingboard-carrier-pro" rel="nofollow noreferrer">The HummingBoard-Pro</a> does, but it is $55. My number needs to be under $25 each to be cost effective. Here are others: <a href="http://www.gateworks.com/product/item/ventana-gw5510-single-board-computer" rel="nofollow noreferrer">Gateworks</a> (price unknown), <a href="https://software.intel.com/en-us/iot/hardware/galileo" rel="nofollow noreferrer">Intel Galileo</a> with mPCI, $45 each.</p>
<ul>
<li>Samuel Cozennat gives us a gorgeous (but expensive) <a href="https://hackernoon.com/installing-a-diy-bare-metal-gpu-cluster-for-kubernetes-364200254187" rel="nofollow noreferrer">example</a> using Intel NUCs. He includes the hardware build and provisioning setup. Very nice, Sam! Thanks.</li>
</ul></li>
<li><p>PCIe can be split somewhat like USB and Thunderbolt... who knew? Here are a couple of limited splitters: <a href="http://amfeltec.com/splitters-gpu-oriented/" rel="nofollow noreferrer">Amfeltec</a>, <a href="http://www.aliexpress.com/item/Free-shipping-1-to-3-PCI-express-1X-slots-Riser-Card-PCIe-x1-to-external-3/32292624969.html" rel="nofollow noreferrer">C0C0C3</a>. The <a href="https://en.wikipedia.org/wiki/PCI_Express" rel="nofollow noreferrer">PCIe spec</a> indicates that it could theoretically support 32 x1 devices.</p></li>
<li><p>Thunderbolt <a href="https://www.techinferno.com/index.php?/forums/topic/5406-making-a-us68-thunderboltexii-pcie-based-tb2-egpu-adapter/" rel="nofollow noreferrer">has the capability</a> (especially for low / non-video data rates), but existing BIOS / mainboard / driver support is not generally developed. There are some <a href="https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&amp;field-keywords=thunderbolt%20pcie" rel="nofollow noreferrer">existing products</a> that target laptops.</p></li>
</ol>

http://asianhospital.com/?id=q/1190320 3 How to set the default GPG encryption key? user123456 http://asianhospital.com/?id=users/648705 2017-03-19T22:05:03Z 2017-03-24T19:07:37Z
<p>Say I am using only one encryption key most of the time.</p>
<p><strong>How do I set the default encryption key in order to avoid mentioning it in the encryption command?</strong></p>
<p>In other words, I want this command:</p>
<pre><code>gpg -e
</code></pre>
<p>to be equivalent to the command with the recipient:</p>
<pre><code>gpg -e -r reciever@mail.edu
</code></pre>
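<p>gpg reads this from its per-user configuration file, so no wrapper script is needed. A minimal sketch; the address is the example one from the question:</p>
<pre><code># ~/.gnupg/gpg.conf
# encrypt to this recipient whenever none is given on the command line
default-recipient reciever@mail.edu

# or, to default to your own key instead:
# default-recipient-self
</code></pre>
<p>With either line in place, a plain <code>gpg -e file</code> behaves like the explicit <code>-r</code> form shown above.</p>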
http://asianhospital.com/?id=q/1170703 0 GeForce GTX 750 Ti Display Issue Saim Mehmood http://asianhospital.com/?id=users/411883 2017-01-23T18:48:06Z 2017-01-23T18:51:58Z
<p><a href="https://i.stack.imgur.com/9cdKN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9cdKN.jpg" alt="enter image description here"></a></p>
<p>[Perspective crop - original image: <a href="https://i.stack.imgur.com/kg1DP.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/kg1DP.jpg</a> ]</p>
<p>To resolve the issue shown above I have tried three different driver versions (listed below) and also changed my desktop PC, but the problem is still there.</p>
<p>Initially the card was working fine. I need to perform general purpose computing (GPGPU) using this card.</p>
<ol>
<li>372.54-desktop-win10-64bit-international-whql</li>
<li>375.95-desktop-win10-64bit-international-whql (clean installed this one)</li>
<li>376.33-desktop-win10-64bit-international-whql</li>
</ol>
<p>System configuration:</p>
<ul>
<li>Intel Core i5-4590</li>
<li>RAM 4 GB</li>
<li>OS: Windows 10 Enterprise 64-bit</li>
<li>Card: NVIDIA GeForce GTX 750 Ti</li>
</ul>
<p>Is there any possible solution?</p>
<p>Thank you!</p>

http://asianhospital.com/?id=q/1155680 1 GPGPU and motherboard compatibility Islam Sabyrgaliyev http://asianhospital.com/?id=users/674021 2016-12-12T11:04:29Z 2017-01-10T15:37:49Z
<p>How can I verify the compatibility of a motherboard with GPU cards that have over 4GB of memory, such as the Tesla K40, K80, Titan X, etc.?</p>
<p>The problem is, the Tesla K40 does not work properly on AMD-based Supermicro servers. Searching forums shows that the motherboard must support some kind of BAR region over 4GB addressing. <strong>Which parameter defines this feature, i.e. what should we look at before purchasing the motherboard?</strong></p>

http://asianhospital.com/?id=q/1159447 0 How to have graphics enabled in multiple VMs without dedicated pass-through (GPU emulation?) Rakshith Ravi http://asianhospital.com/?id=users/678094 2016-12-22T23:54:42Z 2016-12-22T23:54:42Z
<p>Is there a way to emulate GPUs in a VM without doing an actual GPU passthrough?</p>
<p>Now now, before y'all close this question saying "GPU emulation is not possible, it won't increase performance", let me explain:</p>
<p>I have a system which will run multiple (around 20 or so) virtual machines (yeah, it's more like a server. No, this question does not belong on Server Fault).<br> Each of those machines will have a desktop environment in them (probably running Windows, but I haven't decided that yet) and will be used as a normal desktop by an average Joe (watching movies, Office software, etc.) by remotely logging in.</p>
<p>The only problem I'm facing is: if I have so many virtual machines, each with basic graphics requirements, how do I allocate the GPUs accordingly? I definitely cannot afford to have a separate GPU for each VM and do a GPU pass-through for each VM.</p>
<p>Clearly I'm missing something here. I'd like to know if there is a way to have one big powerful GPU installed and split it up accordingly, perhaps by emulating a GPU in each VM and then handing the graphics work to the host GPU. Is there any way to do this?</p>
<p>P.S. I'm a noob to GPU emulation, so please explain in detail.</p>

http://asianhospital.com/?id=q/1152131 1 Hardware limitations in data transfer (high throughput) Daimonie http://asianhospital.com/?id=users/670417 2016-12-02T09:44:31Z 2016-12-02T09:44:31Z
<p>I am currently looking into the hardware limitations of a scientific setup. We are running into high-load-related data loss. I will first explain the problem and propose a solution, which I hope you can verify.</p>
<p>We have a camera providing four 120 px x 120 px images at 10 kHz. These are gathered by a frame grabber (NI PCIe-1433). The frame grabber is connected to a PCI slot.</p>
<p>If I understood correctly, the data will transfer from the frame grabber to the CPU. (Frame grabber -> bus -> south bridge -> bus -> north bridge -> front side bus -> CPU -> on-chip memory controller -> bus -> RAM?)</p>
<p>We then load the data onto the high-end GPU, which means the CPU requests the data from RAM (RAM -> bus -> CPU memory controller?) and loads it to the GPU (CPU -> front side bus -> north bridge -> bus -> NVidia GPU?).</p>
<p>The frame grabber's specifications themselves are quite clear, and it should be able to handle this rate. The current thinking is that the double CPU load (writing to RAM; RAM -> GPU) is causing a bottleneck. The likely fixes are then to either upgrade the CPU to a model with a higher single-core clock speed and/or to upgrade the RAM.</p>
<p>I'm also looking for a resource that succinctly explains these data transfers (probably without the frame grabber) and how to assess the speeds and find potential bottlenecks.</p>
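<p>Before upgrading hardware it is worth measuring the host-to-device path in isolation. A small sketch using the bandwidthTest sample that ships with the CUDA toolkit; the install path varies with the toolkit version:</p>
<pre><code># build the bundled bandwidth benchmark
cd /usr/local/cuda/samples/1_Utilities/bandwidthTest && make

# pageable vs. pinned host memory makes a large difference for sustained transfers
./bandwidthTest --memory=pageable
./bandwidthTest --memory=pinned
</code></pre>
<p>As a sanity check on the numbers: four 120 x 120 px frames at 10 kHz is roughly 576 MB/s at 8 bits per pixel (about double that at 16 bits), which is well within what a pinned-memory transfer over even a modest PCIe link normally sustains, so the copies themselves are less likely to be the limit than how often the CPU has to orchestrate them.</p>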
http://asianhospital.com/?id=q/1133829 1 Linux: How to use onboard graphics card for graphics and not the dedicated GPGPU? mike van der naald http://asianhospital.com/?id=users/651203 2016-10-11T21:54:48Z 2016-10-25T10:47:09Z
<p>I recently bought a GPGPU (an Nvidia GeForce GTX 950 card) so I could use CUDA wrappers in my C code. After installing CUDA 8.0 and plugging my monitors into my onboard graphics card (not the GPGPU), I run "nvidia-smi" and I see the following:</p>
<pre><code>+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1054 G /usr/lib/xorg/Xorg 305MiB |
| 0 1805 G compiz 84MiB |
| 0 4179 G ...MainFrame --force-fieldtrials=*AppBannerT 80MiB |
| 0 5224 G unity-control-center 1MiB |
| 0 6925 C python3 52MiB |
+-----------------------------------------------------------------------------+
</code></pre>
<p>Python3 is the only thing I actually want using this device. How can I ensure that my GPGPU is not being used by Xorg or any other graphics process? I know my onboard graphics card can drive two monitors no problem, so I would really like it if these processes were run on that instead.</p>
<p>In case it matters, I am running Ubuntu 16.04 on an ASUS machine.</p>

http://asianhospital.com/?id=q/1136452 1 How Shaders map to actual GPU hardware Boagz http://asianhospital.com/?id=users/618638 2016-10-19T04:24:22Z 2016-10-19T08:41:41Z
<p>In an attempt to better understand GPUs and GPU programming, I would like to get a better mental picture of shaders and how they are implemented on the GPU. Is there a 1-to-1 relationship between a shader program and a GPU core? Does a vertex shader program run on one core while, say, the fragment shader runs on another core? Is data then passed from the vertex shader core to the fragment shader core? Or is each individual core on a GPU responsible for all the shaders and the entire graphics pipeline - meaning one GPU core contains the vertex shader, tessellation shader, geometry shader, etc., and each core outputs a final pixel? Any information to help solidify my mental picture would be useful.</p>

http://asianhospital.com/?id=q/1112910 0 Do unused GPU cores get used if I add a dedicated graphics card? lukehawk http://asianhospital.com/?id=users/629305 2016-08-12T17:25:55Z 2016-10-02T17:54:34Z
<p>If this question is dumb, I apologize. I am a somewhat advanced novice in all things hardware, and still learning.</p>
<p>I have an AMD A8-7600. It has 10 cores (4 CPU + 6 GPU). I needed multiple (4) monitors, so I added two graphics cards.</p>
<p>I disabled the integrated graphics. So does the processor use those 6 GPU cores for anything?
I found <a href="http://asianhospital.com/?id=questions/420925/what-happens-when-you-add-a-graphics-card-to-a-i7-with-built-in-graphics-e-g-h">this</a>, but the comments there are more about Intel chips. And they are arguing over whether the CPU cores perform better after adding a dedicated graphics card, not really addressing what happens to the GPU cores. The consensus seems to be a faster CPU without the heat generated by the GPU, and better performance because RAM is no longer shared with the GPU. This implies the GPU cores are sitting idle, but I am not sure if that's the case, and whether it holds for both Intel and AMD.</p>
<p>My question is more about what those 6 cores DO now that graphics are taken care of. Do the GPU cores sit idle? Or do they get tasked? Is there a way to test this?</p>
<p>(What I would like to hear is that these cores are available. Part of the reason I built this rig was to be able to run several million statistical simulations. That would be helped considerably by being able to run 6 or 8 cores at a time in parallel, instead of just 2 or 3.)</p>
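<p>One concrete way to test this is to see whether the integrated GPU still shows up as a compute device: if it is re-enabled in the BIOS and an OpenCL runtime for it is installed, it can take compute work (OpenCL rather than CUDA, on an AMD APU) even while the discrete cards drive the monitors. A minimal sketch, with Debian/Ubuntu-style package names given only as an illustration:</p>
<pre><code># install the OpenCL device lister
sudo apt-get install clinfo

# every platform/device pair that can accept compute work is listed here;
# the A8-7600's integrated GPU should appear if its driver and ICD are installed
clinfo | grep -E 'Platform Name|Device Name'
</code></pre>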
http://asianhospital.com/?id=q/308771 364 Why are we still using CPUs instead of GPUs? ell http://asianhospital.com/?id=users/55276 2011-07-10T13:31:41Z 2016-07-06T13:38:52Z
<p>It seems to me that these days lots of calculations are done on the GPU. Obviously graphics are done there, but using CUDA and the like, AI, hashing algorithms (think Bitcoin) and others are also done on the GPU. Why can't we just get rid of the CPU and use the GPU on its own? What makes the GPU so much faster than the CPU?</p>

http://asianhospital.com/?id=q/446883 2 Different drivers for different Nvidia GPUs in the same system nwhsvc http://asianhospital.com/?id=users/138811 2012-07-10T00:10:26Z 2016-04-14T19:31:36Z
<p>I have the following two video cards installed in my Arch Linux system:</p>
<pre>
$ lspci | grep -i vga
01:00.0 VGA compatible controller: NVIDIA Corporation G84 [Quadro FX 1700] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation GF100 [Tesla C2050 / C2070] (rev a3)
</pre>
<p>I am interested in using the Quadro FX 1700 for the display with the open source <a href="http://nouveau.freedesktop.org/wiki/" rel="nofollow noreferrer">Nouveau</a> driver. I want to use the Tesla C2070 card for CUDA development using the <a href="http://developer.nvidia.com/cuda-downloads" rel="nofollow noreferrer">Nvidia driver</a>.</p>
<p>Currently, I am using the <code>nvidia</code> and <code>nvidia-utils</code> packages for both the display and CUDA. However, I get better display stability when I am using Nouveau. If I install the package <code>xf86-video-nouveau</code> and comment out the blacklist in <code>/usr/lib/modprobe.d/nvidia.conf</code>, then Nouveau appears to take over both cards. In this state <code>nvidia-smi</code> returns the following:</p>
<pre>
$ nvidia-smi
NVIDIA: could not open the device file /dev/nvidiactl (No such file or directory).
NVIDIA-SMI has failed because it couldn't communicate with NVIDIA driver. Make sure that latest NVIDIA driver is installed and running.
</pre>
<p>Is it possible to "attach" Nouveau only to the Quadro and let the Nvidia driver pick up the Tesla?</p>

http://asianhospital.com/?id=q/683853 -1 Nvidia: GPU monitor application mrDataos http://asianhospital.com/?id=users/0 2013-12-02T20:38:55Z 2016-04-05T19:08:50Z
<p>What software can show statistics about GPU usage in a graphical interface on Linux? I know there is the <a href="https://developer.nvidia.com/nvidia-system-management-interface" rel="nofollow noreferrer">Nvidia System Management Interface</a>, which provides the command nvidia-smi -l 1 and shows GPU fan speed, temperature, memory usage and GPU utilization on the command line, but I want something more graphical. There is also a <a href="http://www.matrix44.net/blog/?p=876" rel="nofollow noreferrer">script</a> very similar to nvidia-smi that shows the percentage of GPU usage and memory usage. I use Fedora 16 and the graphics card is a Tesla C1060. To summarize: is there a graphical GPU monitor for Linux?</p>

http://asianhospital.com/?id=q/522051 4 Tesla C2075 as a VGA KovBal http://asianhospital.com/?id=users/2365 2012-12-20T14:35:41Z 2016-03-10T14:17:24Z
<p>I'd like to use a Tesla C2075's <s>VGA (D-SUB)</s> DVI output. I installed the latest Quadro driver (as suggested by the NVIDIA driver finder) on 64-bit Windows 7, but it doesn't seem to be working. E.g. I can't use Aero, can't set a resolution larger than 1600x1200, etc.</p>
<p>Is it possible that the card doesn't function as a graphics card? Or should I install some other drivers?</p>
<hr>
<p><strong>Update</strong>: the D-SUB output is actually a DVI-I output with the appropriate (passive) adapter.<br/> The driver is obviously working, because OpenCL applications recognize the device, and they can use it.</p>

http://asianhospital.com/?id=q/943736 1 Use Nvidia GPU as primary video output and AMD GPU as OpenCL on Arch Falxmen http://asianhospital.com/?id=users/471974 2015-07-22T07:10:56Z 2016-03-08T17:28:00Z
<p>How can I use my Nvidia Titan X as my main video output with the proprietary driver and an AMD Fury X as an OpenCL card?</p>
<p>Is it possible to have Catalyst and the Nvidia proprietary driver installed at the same time?</p>

http://asianhospital.com/?id=q/394031 2 Mixing AMD and NVIDIA graphics cards on one system [duplicate] Dmitri Nesteruk http://asianhospital.com/?id=users/52734 2012-02-25T20:12:26Z 2016-03-08T08:31:52Z
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="http://asianhospital.com/?id=questions/92374/multiple-video-cards-on-windows-7">Multiple video cards on Windows 7</a> </p> </blockquote>
<p>I have a bit of a problem with graphics cards that I'd appreciate advice on.</p>
<p>On the one hand, I need an <strong>ATI</strong> graphics card, because a single Eyefinity card is perfect for running 6 monitors at 1080p via DisplayLink. It's awesome and though I know that I can replicate it with NVIDIA, it appears to be something that doesn't come out of the box.</p>
<p>On the other hand, I want an <strong>NVIDIA</strong> card because I want to do CUDA. This card will not drive any monitors; it's purely for floating-point calculations.</p>
<p>So my question is: is it possible to mix the two on one system? Will there be any problems with drivers, the motherboard, etc.? And if not - what are my alternatives? (I.e., is it possible to drive 6 monitors <em>and</em> have a card that CUDA can be debugged on? Note that I won't consider business-grade graphics cards.)</p>
<p><strong>Update:</strong> 6 different screens, Windows 7.</p>

http://asianhospital.com/?id=q/246909 6 ATI Mobility Radeon HD 4500 - Support OpenCL or not? Randall Flagg http://asianhospital.com/?id=users/52969 2011-02-17T09:52:20Z 2016-02-16T09:07:23Z
<p>I wanted to run Folding@Home on my laptop. I downloaded version 6.41, which supports GPGPU for ATI with the r_700 switch.
It seemed to work slowly, so I took a look with GPU-Z and I was puzzled.</p>
<p>ATI Mobility Radeon HD 4500 - does it support OpenCL or not?</p>
<p>I thought it did, but according to GPU-Z it doesn't. Can anyone clear this up for me?</p>
<p><img src="https://i.stack.imgur.com/nxaTJ.png" alt="GPU-Z"></p>

http://asianhospital.com/?id=q/942927 -1 How to enable the disabled Streaming Multiprocessors (SMs)? skm http://asianhospital.com/?id=users/426409 2015-07-20T15:04:41Z 2015-07-21T13:05:47Z
<p>I am using an NVIDIA Quadro K2000 GPU. I ran <code>deviceQuery.exe</code>, the results of which are below. It says that I have only 2 SM units. I am not sure whether I really have only 2 SMs or whether some of my SMs are disabled, as mentioned in the <a href="https://stackoverflow.com/questions/16639766/does-multiprocessorcount-gives-the-number-of-streaming-multiprocessors">third comment on this SO question</a>.</p>
<p>I also saw that there are 192 SPs per SM. Maybe there is some way to enable more SMs, in which case the number of SPs per SM would decrease.</p>
<p><a href="https://i.imgur.com/T4rfxCy.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/T4rfxCy.png" alt="image"></a></p>
<p><strong>Update:</strong> The reason I am asking is that I want to make my GPU processing efficient. I have a 1080 x 1920 image which I have divided into three segments. I am transferring these segments host-to-device (H2D), processing them, and transferring them back device-to-host (D2H), asynchronously. Therefore, I want to choose block dimensions and a number of threads per block that efficiently utilize my GPU's hardware configuration. Furthermore, I am still confused about whether a GPU with more than 2 SMs (let's say 8 SMs) but fewer SPs per SM (384/8 = 48 SPs per SM) would perform the same as a GPU with 2 SMs and 192 SPs per SM. <strong><em>I mean, is it the total number of available SPs that matters?</em></strong></p>

http://asianhospital.com/?id=q/805029 -3 Which single factor does a GPU's performance mainly depend on? [closed] Hassaan Salik http://asianhospital.com/?id=users/363192 2014-08-30T09:56:35Z 2014-08-30T11:03:16Z
<p>There are many specs given for a GPU. I want to know which single factor its performance can be measured by; would it be pixels/sec, clock speed, number of cores, amount of memory, memory speed, FLOPS, or something else? I want to know the answer both for graphics processing and for mathematical computations.</p>

http://asianhospital.com/?id=q/589336 4 Execute OpenCL code on the CPU Misch http://asianhospital.com/?id=users/144416 2013-04-29T19:11:44Z 2014-01-05T09:36:56Z
<p>I want to execute <a href="https://en.wikipedia.org/wiki/OpenCL" rel="nofollow noreferrer">OpenCL</a> code on a PC which doesn't have a graphics card, nor any other hardware component that is able to execute OpenCL. Is it possible to compile my OpenCL code in such a way that it can be executed on the CPU in Linux? Or is it possible to simulate a GPU environment on the CPU?</p>
<p>Note: It's about testing whether the code works as expected, not about performance.</p>
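<p>Yes - OpenCL is not tied to GPUs; a CPU-only implementation such as POCL (or Intel's CPU runtime) executes the same kernels on the host cores, which is enough for correctness testing. A minimal sketch for a Debian/Ubuntu-style system; package names differ on other distributions:</p>
<pre><code># install a CPU OpenCL implementation plus the ICD loader and a device lister
sudo apt-get install pocl-opencl-icd ocl-icd-libopencl1 clinfo

# the CPU should now be reported as an OpenCL device under the
# "Portable Computing Language" platform
clinfo | grep -E 'Platform Name|Device Type'
</code></pre>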
http://asianhospital.com/?id=q/655299 2 GPU Processing overkill? Ben Franchuk http://asianhospital.com/?id=users/145025 2013-10-07T04:32:22Z 2013-10-07T08:12:03Z
<p>Is there a point where using GPU processing or coprocessors (such as the Intel Xeon Phi card or the Nvidia Tesla card) can actually reduce the speed at which software computes data?</p>
<p>Say I had a massive cluster of external PCIe expansions (like this one: <a href="http://www.cyclone.com/products/expansion_systems/FAQ.php" rel="nofollow noreferrer">http://www.cyclone.com/products/expansion_systems/FAQ.php</a>), all connected to the same computer. Given that the data has to be distributed over the expansions and the GPUs within them, would it not theoretically actually <em>slow</em> the rate at which data gets processed?</p>
<p>Just wondering. If this is not the case, why not?</p>

http://asianhospital.com/?id=q/465116 4 Options for scalable commodity GPU servers for CUDA? Dave S http://asianhospital.com/?id=users/153978 2012-08-23T00:28:19Z 2013-09-08T15:09:23Z
<p>I'm doing some machine learning work that benefits tremendously from using the GPU. I'm kind of at the limits of my current setup (a workstation with a single GTX 580) and I really don't have room for another computer at home. So I'm looking to build a GPU server (and very possibly several of them) and trying to find the most cost-effective way to do so.</p>
<p>Ideally I'd like to build something like Nvidia's Tesla servers (e.g. the S2075), but with GTX 580s instead of Tesla cards. This fits four cards into a 1U chassis which is then connected via PCIe extenders to a host system. A DIY version of this doesn't seem to exist.</p>
<p>So my next plan is going 4U and basically putting a standard quad-SLI build in that. I'd probably use two 850-watt PSUs to power the four cards. Cooling could also be an issue.</p>
<p>So my questions are specifically these:</p>
<ul>
<li>If I'm primarily using the GPU and only using the CPU for handling basic logic and such, is it reasonable to use a low-end CPU like an i3?</li>
<li>If I want to co-locate, wouldn't this be fairly expensive / use a lot of power?</li>
<li>Am I going about this the wrong way, and is there a much easier / more cost-effective way to build GPU number crunchers and not keep them in my apartment?</li>
</ul>
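<p>One thing worth checking on any build that hangs cards off extenders or risers is the PCIe link each GPU actually negotiated, since a x16 card silently training down to x1 can matter for workloads that stream training data to the device. A small sketch using standard nvidia-smi query fields (reasonably recent drivers assumed):</p>
<pre><code># show the negotiated PCIe generation and lane width per GPU
nvidia-smi --query-gpu=index,name,pcie.link.gen.current,pcie.link.width.current --format=csv
</code></pre>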