
Rethinking AI Compute Infrastructure: The TensorWave Approach

In this episode, Jeff Tatarchuk, co-founder of TensorWave, shares how his deep industry experience and innovative mindset are transforming AI compute infrastructure. We explore how building specialized data centers, focusing on AMD GPUs, and creating flexible ecosystems are shaping the future of scalable AI.
Timestamps:
00:00 – Introduction to TensorWave and the AI compute landscape
02:30 – The rise of Neo clouds and innovation waves in cloud infrastructure
06:00 – How TensorWave’s FPGA cloud background shaped its GPU strategy
10:00 – Challenges in deploying large data centers: power, supply chain, and permitting
14:00 – Building and scaling AMD GPU data centers quickly and efficiently
19:00 – Software ecosystems: the CUDA moat and TensorWave’s ‘Beyond CUDA’ summit
23:00 – Market differentiation: technical and operational challenges in the Neo cloud space
27:00 – Supporting enterprise fine-tuning and large-scale training demands
32:00 – AMD’s technical advantages: VRAM, chiplet architecture, and software support
36:00 – Building an open, heterogeneous AI ecosystem beyond CUDA
40:00 – What success looks like: a resilient, accessible AI compute future
This conversation offers a strategic look at how focused infrastructure development, software ecosystem support, and hardware differentiation are shaping the future of accessible, scalable AI compute. Whether you're building data centers, developing AI hardware, or simply following industry shifts, this episode provides valuable insight into how companies like TensorWave are reshaping the landscape.