mastodon.ie is one of the many independent Mastodon servers you can use to participate in the fediverse.
Irish Mastodon: run from Ireland, welcoming all who respect the community rules and members.

#cuda

GoCV
GoCV 0.42 is out with support for the latest @opencv 4.12, new CUDA functions, ViT DNN tracking, and lots more!
Full release notes here: https://github.com/hybridgroup/gocv/releases/tag/v0.42.0
Go get it right now!
#golang #opencv #computerVision #dnn #cuda #vision

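GoCV wraps OpenCV's CUDA modules from Go. As a language-neutral illustration of what a CUDA-accelerated OpenCV pipeline looks like, here is a minimal sketch using OpenCV's Python bindings (cv2.cuda) rather than GoCV itself; it assumes an OpenCV build with CUDA enabled and a local file named input.jpg, neither of which comes from the release notes.

```python
# Minimal sketch of a CUDA-accelerated OpenCV pipeline using the Python bindings
# (cv2.cuda), shown instead of GoCV purely for illustration. Assumes OpenCV was
# built with CUDA support and that "input.jpg" exists; both are assumptions.
import cv2

if cv2.cuda.getCudaEnabledDeviceCount() == 0:
    raise SystemExit("No CUDA-capable device visible to OpenCV")

img = cv2.imread("input.jpg")            # decode on the CPU
if img is None:
    raise SystemExit("input.jpg not found")

gpu = cv2.cuda_GpuMat()                  # GPU-resident matrix
gpu.upload(img)                          # host -> device copy

gray = cv2.cuda.cvtColor(gpu, cv2.COLOR_BGR2GRAY)   # colour conversion on the GPU
cv2.imwrite("gray.jpg", gray.download())             # device -> host copy, then save
```
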
Nicolas MOUART
The Nvidia RTX A2000 6GB is far from the best card, and it lacks VRAM. But compute-wise, and in terms of fair use, with a little patience it is a really good deal at 70 W. It is silent, small, and works in older computers (no need to change the PSU). I think it is great for machine learning rather than generative AI, and it supports #CUDA (it is a card for working, not gaming).
NB: non-sponsored review
#GPU #generativeAI #hardware #review #nvidia #ML #machinelearning

ACCUConf
https://youtu.be/tDegOeivJs4
#cplusplus #cpp #cuda #programming

Loki the Cat
Nvidia's CUDA Platform Now Supports RISC-V! 🎉 The proprietary giant is embracing open architecture, letting RISC-V CPUs orchestrate GPU workloads alongside their traditional x86/Arm siblings. Announced at China summit - interesting timing for keeping CUDA thriving globally 🤔
https://hardware.slashdot.org/story/25/07/22/2042234/nvidias-cuda-platform-now-support-risc-v
#NVIDIA #CUDA #RISCV

It's FOSS
CUDA comes to RISC-V!
https://news.itsfoss.com/nvidia-cuda-risc-v/
#riscv #nvidia #cuda

knoppix
In a surprise move, NVIDIA is bringing CUDA to RISC-V CPUs 💥
Announced at the RISC-V Summit China, this allows RISC-V processors to run CUDA drivers + logic, with NVIDIA GPUs handling compute tasks ⚙️
It enables open CPU + proprietary GPU AI systems, big for edge, HPC and China's chipmakers 🇨🇳

A potential shift in global AI infrastructure 🌐

@itsfoss
https://news.itsfoss.com/nvidia-cuda-risc-v/
#CUDA #RISCV #OpenHardware #AI #MachineLearning #NVIDIA #EdgeComputing #HPC #GeForce #GPU #CPU #ArtificialIntelligence #TechNews

Kathy Reid
By Anton Shilov for @TomsHardware - #NVIDIA #CUDA now supports #RISCV - is this a signal of broader ecosystem support?
https://www.tomshardware.com/pc-components/gpus/nvidias-cuda-platform-now-supports-risc-v-support-brings-open-source-instruction-set-to-ai-platforms-joining-x86-and-arm

Benjamin Carr, Ph.D. 👨🏻‍💻🧬
#NVIDIA Bringing #CUDA To #RISCV
NVIDIA's drivers and CUDA software stack are predominantly supported on x86_64 and AArch64 systems, though in the past they were also supported on IBM POWER. This week at the RISC-V Summit China event, NVIDIA's Frans Sijstermans announced that CUDA will be coming to RISC-V.
#AMD, for their part, can already build the upstream #opensource #AMDKFD kernel compute driver on RISC-V, and the #ROCm user-space components can also be built on RISC-V.
https://www.phoronix.com/news/NVIDIA-CUDA-Coming-To-RISC-V

heise online English
Apple AI framework MLX: future support for Nvidia's CUDA
Although Nvidia GPUs no longer ship in Macs, Apple's MLX will soon run on them too. This makes interesting ports possible.
https://www.heise.de/en/news/Apple-AI-framework-MLX-future-support-for-Nvidia-s-CUDA-10493373.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege&utm_source=mastodon
#Apple #CUDA #IT #KünstlicheIntelligenz #macOS #Mobiles #Nvidia #news

Mac & i
Apple AI framework MLX: upcoming support for Nvidia's CUDA
Although Macs no longer have Nvidia GPUs, Apple's MLX is nonetheless expected to run on them soon. This makes interesting ports possible.
https://www.heise.de/news/Apple-KI-Framework-MLX-Kuenftig-Support-fuer-Nvidias-CUDA-10491534.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege&utm_source=mastodon
#Apple #CUDA #IT #KünstlicheIntelligenz #macOS #Mobiles #Nvidia #news

st1nger :unverified: 🏴‍☠️ :linux: :freebsd:
#GPUHammer is the first attack to show #Rowhammer bit flips on #GPU memories, specifically on GDDR6 memory in an #NVIDIA A6000 GPU. Our attacks induce bit flips across all tested DRAM banks, despite in-DRAM defenses like TRR, using user-level #CUDA #code. These bit flips allow a malicious GPU user to tamper with another user's data on the GPU in shared, time-sliced environments. In a proof-of-concept, we use these bit flips to tamper with a victim's DNN models and degrade model accuracy from 80% to 0.1% using a single bit flip. Enabling Error Correction Codes (ECC) can mitigate this risk, but ECC can introduce up to a 10% slowdown for #ML #inference workloads on an #A6000 GPU.
https://gpuhammer.com/

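Since the post points to ECC as the practical mitigation, here is a small sketch for checking whether ECC is currently enabled on local NVIDIA GPUs via the NVML Python bindings (assuming the nvidia-ml-py package is installed). It is purely an illustration, not part of the GPUHammer tooling; note that workstation and data-center GPUs such as the A6000 expose ECC, while most consumer cards do not.

```python
# Sketch: query ECC mode per GPU via NVML (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        try:
            mode = pynvml.nvmlDeviceGetEccMode(handle)   # [current, pending]
            current, pending = mode[0], mode[1]
            print(f"GPU {i} ({name}): ECC current={bool(current)} pending={bool(pending)}")
        except pynvml.NVMLError:
            print(f"GPU {i} ({name}): ECC not supported")
finally:
    pynvml.nvmlShutdown()
```
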
IT News
Chinese firms rush for Nvidia chips as US prepares to lift ban - Chinese firms have begun rushing to order Nvidia's H20 AI ch... - https://arstechnica.com/information-technology/2025/07/nvidia-to-resume-china-ai-chip-sales-after-huang-meets-trump/
#aiinfrastructure #machinelearning #exportcontrols #semiconductors #donaldtrump #jensenhuang #bytedance #deepseek #aichips #chatgpt #chatgtp #tencent #biz #policy #huawei #nvidia #aigpu #china #omdia #cuda #amd #ai

Hacker News 50
Apple's MLX adding CUDA support
Link: https://github.com/ml-explore/mlx/pull/1983
Discussion: https://news.ycombinator.com/item?id=44565668
#apple #cuda

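MLX exposes a device-agnostic array API, which is why a CUDA backend can be slotted in underneath existing programs, as the linked PR proposes. A minimal sketch of ordinary MLX code follows; the shapes are invented for the example, and whether it runs on Metal or (eventually) CUDA depends entirely on how MLX was built.

```python
# Minimal MLX sketch: the same array code targets whichever backend MLX was built
# with (Metal today; CUDA once the linked PR lands). Nothing here is specific to
# the CUDA backend itself.
import mlx.core as mx

print(mx.default_device())          # e.g. Device(gpu, 0)

a = mx.random.normal((1024, 1024))
b = mx.random.normal((1024, 1024))
c = (a @ b).sum()                   # lazily builds the compute graph
mx.eval(c)                          # forces evaluation on the default device
print(c.item())
```
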
HGPU group
Hardware Compute Partitioning on NVIDIA GPUs for Composable Systems
#CUDA #TaskScheduling #Package
https://hgpu.org/?p=30037

HGPU group
Demystifying NCCL: An In-depth Analysis of GPU Communication Protocols and Algorithms
#CUDA #GPUcluster #Communication
https://hgpu.org/?p=30035

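The collectives NCCL implements (all-reduce, broadcast, reduce-scatter, and so on) are usually driven from a framework rather than from the raw NCCL API. As a small illustration of the kind of operation the paper analyzes, here is a minimal PyTorch all-reduce using the NCCL backend; the script name and tensor contents are invented for the example, and it assumes at least two CUDA GPUs on one machine launched with torchrun.

```python
# all_reduce_demo.py -- launch with: torchrun --nproc_per_node=2 all_reduce_demo.py
# Uses PyTorch's "nccl" backend, so the ring/tree algorithms the paper discusses
# are selected inside NCCL itself.
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # rank/world size come from torchrun env vars
    rank = dist.get_rank()
    torch.cuda.set_device(rank)               # single-node assumption: rank == local GPU index

    # each rank contributes a tensor filled with its own rank value
    t = torch.full((4,), float(rank), device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)  # NCCL all-reduce across all ranks
    print(f"rank {rank}: {t.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```
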
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
New #ZLUDA 5 Preview Released For #CUDA On Non-NVIDIA #GPU
For now, this ability to run unmodified CUDA apps on non-#NVIDIA GPUs is focused on #AMD GPUs of the #Radeon RX 5000 series and newer, i.e. AMD Radeon GPUs with #ROCm support. Besides the CUDA code samples, GeekBench has been one of the early targets for testing.
https://www.phoronix.com/news/ZLUDA-5-preview.43

IT News
AI mania pushes Nvidia to record $4 trillion valuation - On Wednesday, Nvidia became the first company in history to ... - https://arstechnica.com/ai/2025/07/ai-mania-pushes-nvidia-to-record-4-trillion-valuation/
#largelanguagemodels #aiinfrastructure #machinelearning #exportcontrols #semiconductors #generativeai #jensenhuang #stockmarket #microsoft #aichips #chatgpt #biz #nvidia #openai #aigpu #apple #china #cnbc #cuda #gpu #ai