
RISC-V was originally developed because some vector-processor / ML people at Berkeley needed an extensible control processor for their specialized hardware.

They'd previously been using an ancient 32-bit MIPS core, but they needed 64-bit support, a good amount of spare opcode space for custom instructions, and reasonable licensing. Nothing suitable existed, so they rolled their own.

RISC-V with the almost-done Vector extension is likely to be a big force in ML hardware.



To add to what you said, a 64-bit ARM ISA didn't publicly exist yet when Berkeley started RISC-V.


ML hardware has more to gain from FPGAs.


Based on what?

Whatever number of ALUs / DSP slices you can put in an FPGA and soft-wire together, you can put just as many hard-wired into a custom SoC with lower area, lower cost, and higher performance.

An FPGA is good for prototyping this until you figure out the best arrangement, sure, but three months later you can have real chips.


Based on Google's TPUs, and Microsoft's Brainwave.


TPUs are not FPGAs.


> Tensor Processing Units (TPUs) are Google’s custom-developed application-specific integrated circuits (ASICs) used to accelerate machine learning workloads.

https://cloud.google.com/tpu/docs/tpus?hl=en

> Project Brainwave is a deep learning platform for real-time AI inference in the cloud and on the edge. A soft Neural Processing Unit (NPU), based on a high-performance field-programmable gate array (FPGA)

https://www.microsoft.com/en-us/research/project/project-bra...

I got the TPUs wrong and Brainwave right.

What they certainly are not is RISC-V.


The guy who designed the TPU now works on RISC-V and is a fan of it. One of the reasons is his experience with the TPU: he realized that custom hardware is really great, but a more generally programmable approach would have advantages.

That's exactly what the RISC-V Vector extension was designed for. When they started the TPU, RISC-V wasn't ready for something like that, and the Vector extension was at best an idea.

Saying RISC-V is bad for ML because product XY doesn't use it is a terrible argument in general. Over the next 10 years we will have literally thousands of different things that make AI fast. All of them come with tradeoffs.

RISC-V as a base processor for many systems is clearly a great fit, even on FPGAs. The RISC-V Vector extension is an excellent fit for many AI problems, and many companies are currently working on it.



