PDP-11🚀
AI Hardware & Domain Specific Computing

#FPGA #ASIC #HPC #DNN

@vconst89
Sorry guys, this channel is turning into a link-collection feed, but I promise to be back on track soon with brief summaries :)

https://www.electronicdesign.com/industrial-automation/article/21136402/smartnic-architectures-a-shift-to-accelerators-and-why-fpgas-are-poised-to-dominate
Bluespec Haskell is an open-source framework and yet another high-level hardware description language, this time based on Haskell.

Jonathan Ross, founder of the AI hardware startup Groq and an ex-Google TPU developer, claims that it was used in the initial stages of the TPU design. It looks like Groq is also actively using it.
https://www.linkedin.com/in/jonathan-ross-12a95156/

Bluespec research note
https://arxiv.org/pdf/1905.03746.pdf

The latest version of the Bluespec compiler can be found here
https://github.com/B-Lang-org/bsc

And here's the tutorial
https://github.com/rsnikhil/Bluespec_BSV_Tutorial/tree/master/Reference
Syntiant, a startup developing AI edge hardware for voice and sensor solutions, today closed a $35 million round. CEO Kurt Busch says the funds will be used to ramp up production throughout the remainder of 2020.

The one million parts shipped to date include both NDP100 and NDP101 parts, counting from the company's first production orders in September 2019. Both are manufactured at UMC in Singapore.

Syntiant's NDP100 and NDP101 processors measure about 1.4 millimeters by 1.8 millimeters and can run models with over half a million parameters. Packing a general-purpose Arm Cortex-M0 processor paired with 128KB of RAM, the chips consume less than 140 microwatts and enable onboard firmware security and authentication, keyword training, and up to 64 output classifications.

The NDP100 and NDP101, which initially targeted performance of around 20 TOPS (trillion operations per second) per watt, use hundreds of thousands of NOR flash memory cells that read and write data one word or byte at a time. The processor-in-memory architecture was proposed by CTO Jeremy Holleman, a researcher at the University of North Carolina at Charlotte, as far back as the 2014 International Solid-State Circuits Conference. Syntiant asserts that the architecture is ideal for executing the massively parallel operations of deep learning at low power.

According to a report published by Meticulous Research, the speech and voice recognition hardware market is expected to reach $26.8 billion by 2025

Article on the EET
Syntiant Webpage
NDP100 Overview
Google's supercomputer, 4096 TPU v3 chips in a 2D torus topology, wins the MLPerf benchmark contest; the results table was published on the 29th of July.
Note also that this is probably the first public appearance of TPU v4. For now it shows a rather moderate result, about 3.5 times slower than the winner.
But it uses 8 times fewer chips (256), so it looks like Google will beat its own record very soon, once they scale the TPU v4 supercomputer up to 1024 chips.
❓ Can I run my neural network on an FPGA?
❓ Does Vivado HLS run my C++ code on the FPGA?
❓ What is the difference between oneAPI and Intel OpenCL?
❓ Vitis is a sort of HLS for Vivado, isn't it?
๐Ÿค”

There are two main FPGA vendors today, Xilinx and Intel. Both of them have released dozens of different software-developer-oriented tools over the last couple of years. All of them promise the same thing: run your software on an FPGA in a few clicks.

๐Ÿ˜• It's a bit tricky to get through all these marketing papers and understand the role of each new thing.

๐Ÿ“– Here's the paper which should navigate you through all these applications and IDEs.

📝 Feel free to comment on it right in the Google Doc
The latest paper by David Patterson and the Google TPU team reveals details of the world's most efficient, and one of the most powerful, supercomputers for DNN acceleration: TPU v3, the one that was used to train BERT.
We definitely recommend reading the full text, but here are the key insights and TL;DR highlights

Key Insight:
The co-design of an ML-specific programming system (TensorFlow), compiler (XLA), architecture (TPU), floating-point arithmetic (Brain float16), interconnect (ICI), and chip (TPUv2/v3) let production ML applications scale at 96%-99% of perfect linear speedup, with 10x gains in performance/Watt over the most efficient general-purpose supercomputers.

More highlights:

๐Ÿฃ๐Ÿค๐Ÿ” Three generations
There are three generations of TPU released so far: TPU v1 used fixed-point arithmetic and was used for inference only, while TPU v2 and v3 operate in floating point and are used for training. TPU v4 results were presented in the MLPerf summer release, but there is no public information available yet. The TPU architecture differs from a CPU in:
▪️ Two-dimensional array processing units (instead of the 1D vector SIMD units in a CPU)
▪️ Narrower data (8-16 bits)
▪️ Dropped complex CPU features such as caches and branch prediction

๐Ÿฎ๐Ÿคœ๐Ÿค Fewer cores per chip (two oxen vs 1024 chickens)
NVIDIA puts thousands of CUDA cores inside a chip. TPU v3 has only 2 TensorCores per chip. It's way easier to generate a program for 2 beefier cores than for a swarm of wimpier cores.
Each TensorCore includes the following units:
▪️ ICI (Inter-Core Interconnect), which connects cores across different chips
▪️ HBM, stacked DRAM on the same interposer substrate
▪️ Core Sequencer, which manages instructions and performs scalar operations
▪️ Vector Processing Unit, which performs operations on 1D and 2D vectors
▪️ Matrix Multiply Unit (MXU)
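To get a feel for how a 2D MAC array such as the MXU forms a matrix product, here is a purely illustrative NumPy sketch. It models only the accumulation pattern (one rank-1 update per step), not the real systolic dataflow or the MXU microarchitecture:

```python
import numpy as np

def mac_array_matmul(A, B):
    """Toy model of an output-stationary 2D MAC array: accumulator (i, j)
    adds A[i, k] * B[k, j] at every step k, so the whole array performs
    one outer-product (rank-1) update per step."""
    n, k = A.shape
    _, m = B.shape
    acc = np.zeros((n, m), dtype=np.float32)      # one accumulator per PE
    for step in range(k):                         # one "beat" of the array per step
        acc += np.outer(A[:, step], B[step, :])   # every PE does one MAC
    return acc

A = np.random.rand(4, 8).astype(np.float32)
B = np.random.rand(8, 3).astype(np.float32)
assert np.allclose(mac_array_matmul(A, B), A @ B, atol=1e-5)
```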

๐Ÿฑ๐Ÿถโ“ From inference to training chip
Key challenges on the way from inference chip V1 to training hardware V2
โ–ช๏ธ Harder parallelization
โ–ช๏ธ More computation
โ–ช๏ธ More memory
โ–ช๏ธ More programmability
โ–ช๏ธ Wider dynamic range of data

โœ‚๏ธ๐Ÿงฎโœ‚๏ธ Brain Float
IEEE FP32 and FP16 use (1+8+23) and (1+5+10) bits for the sign, exponent, and mantissa respectively. In practice, DNNs don't need the mantissa precision of FP32, but the dynamic range of FP16 is not enough, and using FP16 also requires loss scaling.
The compromise format bf16 keeps the same 8 exponent bits as FP32 but reduces the mantissa to 7 bits instead of 23.
BF16 cuts memory footprint and power consumption, and no loss scaling is required in software.
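Since bf16 is essentially the upper half of an FP32 word, its behaviour can be mimicked by truncating FP32 values to 16 bits. A minimal NumPy sketch (real hardware typically rounds to nearest-even rather than truncating):

```python
import numpy as np

def to_bfloat16(x):
    """Keep sign + 8 exponent bits + 7 mantissa bits, i.e. the upper 16 bits
    of the IEEE FP32 encoding, and zero out the rest."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([3.14159265, 1e-30, 1e30], dtype=np.float32)
print(to_bfloat16(x))   # FP32 dynamic range is preserved, only ~2-3 decimal digits remain
```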

๐Ÿฉ๐Ÿงฌโšก๏ธ Torus topology and ICI
TPU v1 was an accelerator card for a CPU-based computer. TPU v2 and v3 are building blocks of a supercomputer. Chips are connected through the ICI interface, each link running at ~500 Gbit/s. ICI enables direct connections between chips, so no extra interfaces are needed, whereas GPU/CPU-based supercomputers have to use NVLink and PCIe inside the chassis plus InfiniBand networks and switches between machines.
Chips in TPU v2 and v3 clusters are connected in a 2D torus topology (a doughnut) and achieve nearly linear performance scaling as the number of chips grows.
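A 2D torus simply means every chip has four direct neighbours, with the grid edges wrapping around. A tiny illustrative helper (the grid dimensions below are made up, not the actual pod size):

```python
def torus_neighbours(x, y, n=32, m=32):
    """Four neighbours of chip (x, y) on an n x m 2D torus; the modulo gives
    the wrap-around links that turn a flat grid into a doughnut."""
    return [((x - 1) % n, y), ((x + 1) % n, y),
            (x, (y - 1) % m), (x, (y + 1) % m)]

print(torus_neighbours(0, 0))  # wrap-around: (31, 0) and (0, 31) are direct neighbours
```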


๐Ÿ› โš™๏ธ๐Ÿ–ฅ XLA compiler (to orchestrate them all)
TF programs are graphs of operations, where tensor arrays are first-class citizens. The XLA compiler front end transforms the TF graph into an intermediate representation, which is then efficiently mapped onto the selected TPU (or CPU/GPU) architecture. XLA maps the TF graph's parallelism across hundreds of chips, the TensorCores within each chip, and the multiple units within each core, and it provides precise reasoning about memory use at every point in the program.
The young XLA compiler has more headroom for improvement than the more mature CUDA stack.
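XLA is not tied to the TPU: the same compilation path is exposed on CPUs and GPUs, for example through JAX. A minimal, illustrative sketch of handing a function to XLA (not taken from the paper):

```python
import jax
import jax.numpy as jnp

@jax.jit                           # trace once, compile with XLA for the current backend
def dense_relu(x, w, b):
    return jax.nn.relu(x @ w + b)  # fused by XLA into a single optimized computation

x = jnp.ones((128, 512))
w = jnp.ones((512, 256))
b = jnp.zeros((256,))
y = dense_relu(x, w, b)            # first call compiles; later calls reuse the executable
print(y.shape)
```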


๐ŸŒฒ๐Ÿฐ๐ŸฆŠ Green Power (Forest animals approves)
The TPU v3 supercomputer has already climbed to the 4th row of the TOP500 ranking, but what is remarkable is its overwhelming 146.3 GFLOPS/Watt; the nearest competitor's figure is roughly 10 times lower.

Original Paper
A Domain Specific Computer for training DNN
๐Ÿค”๐ŸŒณ๐Ÿƒโ€โ™‚๏ธ Decision trees accelerating

One might be surprised, but DNNs do not exhaust the list of ML algorithms. In fact, few businesses can find an application for CV or NLP, and few have a significant amount of speech or photo data, the domains where DNNs show SOTA results.

But most of them have huge amounts of irregular tabular data: financial market prices, customer data, base station activity logs, or windmill breakdown statistics.
And that's where decision trees step onto the stage. There are three major frameworks on the market today for training ensembles of decision trees with gradient boosting: XGBoost, CatBoost and LightGBM. Read more about them here or here
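For a sense of scale, training such an ensemble takes only a few lines. A minimal sketch with XGBoost (the dataset and hyperparameters below are made up for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import xgboost as xgb

# Synthetic "tabular" data standing in for prices, logs, customer records, etc.
X, y = make_classification(n_samples=10_000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```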

Due to the specific nature of the algorithm and its good match with the hardware organization, decision trees can be significantly accelerated on FPGAs.
We will cover 2 stories here

๐ŸŒ๐Ÿ‡ฉ๐Ÿ‡ช๐Ÿฅจ Xelera Decision Tree Acceleration
The Germany-based startup Xelera offers FPGA devices as a hardware backend for accelerating decision tree inference. The company claims a 700x speedup and latency improvement. The FPGA results were estimated on AWS F1 cloud FPGA instances and on the Xilinx Alveo U50.
The secret of why FPGAs perform so well on this class of workloads is their unique memory architecture, which consists of thousands of independent blocks of on-chip memory. This memory is not only highly parallel; a key difference from GPU memory is that it handles highly parallel, irregular memory accesses very well.

๐Ÿ‡บ๐Ÿ‡ธ๐Ÿ”ฌ๐ŸŽณ FPGAs for Particles classification
HLS4ML is an open-source framework from a Cornell University team working with CERN, where FPGAs are used for trigger condition detection and particle classification.
HLS4ML generates an HLS description of the ML algorithm, which you can feed to an HLS synthesis tool (e.g. Vivado HLS) to generate an FPGA configuration file.
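For the neural-network case, the basic flow looks roughly like the sketch below. The function names follow the hls4ml Python API as shown in its tutorials; the Keras model, the output directory and the FPGA part number are placeholders, so treat the exact arguments as assumptions:

```python
import hls4ml
from tensorflow import keras

# A toy Keras model standing in for the ML algorithm to be synthesized.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    keras.layers.Dense(5, activation="softmax"),
])

# Derive an HLS configuration (precision, reuse factors, ...) from the model,
# then convert it into an HLS project that Vivado HLS can synthesize.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir="hls4ml_prj", part="xcu250-figd2104-2L-e")
hls_model.compile()            # builds a C-simulation library for bit-accurate checks
# hls_model.build(synth=True)  # runs full HLS synthesis (requires Vivado HLS installed)
```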

The recent paper describes how to use HLS4ML to generate FPGA firmware and host software for decision tree acceleration.

Taking as an example a multiclass classification problem from high energy physics, we show how a state-of-the-art algorithm could be deployed on an FPGA with a typical inference time of 12 clock cycles (i.e., 60 ns at a clock frequency of 200 MHz)
๐Ÿค“๐Ÿค“๐Ÿค–๐Ÿค“๐Ÿค“
How PCIe 5 and its Smart Friends Will Change Solution Acceleration

Nice article by Scott Schweitzer, Xilinx

Keynotes:

🥦 PCIe Gen5 not only doubles throughput bandwidth; on top of it, Compute Express Link (CXL) and the Cache Coherent Interconnect for Accelerators (CCIX) promise efficient communication between CPUs and accelerators such as SmartNICs or co-processors (a back-of-the-envelope bandwidth sketch follows at the end of this list).

🕸 CCIX configurations include direct-attached, switched, and hybrid daisy-chain topologies. CCIX can take memory from different devices, each with varying performance characteristics, pool it together, and map it into a single non-uniform memory access (NUMA) architecture. It then establishes a virtual address space, giving all of the devices in the pool access to the full range of NUMA memory.

๐Ÿฅฅ SmartSSDs, also known as computational storage, place a computing device, often an FPGA accelerator, alongside the storage controller within a solid-state drive. This enables the computing device in the SmartSSD to operate on data as it enters and exits the drive, potentially redefining both how data is accessed and stored.

๐Ÿ‘ฉโ€๐Ÿ”ฌ SmartNICs, are a special class of accelerators that sit at the nexus between the PCIe bus and the external network. While SmartSSDs place computing close to data, SmartNICs place computing close to the network

๐Ÿ‘ฉโ€๐Ÿ‘ฉโ€๐Ÿ‘งโ€๐Ÿ‘ง SmartNICs and DPUs (data processing units) that leverage PCIe 5 and CXL or CCIX will offer us richly interconnected accelerators that will enable the development of complex and highly performant solutions
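Putting the bandwidth doubling into numbers, a back-of-the-envelope sketch (per-direction raw rates for an x16 link, 128b/130b encoding, protocol overhead ignored; rough figures, not vendor specs):

```python
# Raw per-lane rates in GT/s; PCIe Gen3 and later use 128b/130b encoding.
rates = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}
for gen, gt_per_lane in rates.items():
    gbit = gt_per_lane * 16 * (128 / 130)   # usable Gbit/s for an x16 link, one direction
    print(f"{gen} x16: ~{gbit / 8:.0f} GB/s per direction")
# PCIe 3.0 x16: ~16 GB/s, 4.0: ~32 GB/s, 5.0: ~63 GB/s
```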
🇨🇭🇹🇷 Onur Mutlu, a world-leading researcher in computer architecture (SAFARI group, ETH), published a short, thought-provoking keynote paper:
Intelligent Architectures for Intelligent Machines 
Submitted on 13 Aug 2020
 
Highlights:
 ๐Ÿ’กData access is still a major bottleneck.

🤹‍♂ The current processor-centric design paradigm creates a dichotomy between processing and memory/storage: data has to be brought from storage and memory units to the compute units, which are far away from the memory/storage. This processor-memory dichotomy leads to large amounts of data movement across the entire system, degrading performance and expending large amounts of energy.
 
Modern architectures are poor at:
⚡️ Dealing with data: they are designed mainly to store and move data, rather than to actually compute on it
⚡️ Taking advantage of the vast amounts of data and metadata available to them during online operation and over time
⚡️ Exploiting different properties of application data: they are designed to treat all data the same
 
Intelligent architecture
Intelligent architecture should handle (i.e., store, access, and process) data well.
 
Key  principles:
๐Ÿ™Data-centric: minimizing data movement and maximizing the efficiency with which data is handled,
🦐 Data-driven: the architecture should make data-driven, self-optimizing decisions in its components
🐠 Data-aware: the architecture should make data-characteristics-aware decisions in its components and across the entire system
 
Read the full keynote here
๐Ÿ‹
How to Evaluate Deep Neural Network Processors
Vivienne Sze

Why the FOPS/W metric is not enough
A common metric for hardware efficiency is FOPS/W (floating-point operations per second per watt), or TOPS/W (tera operations per second per watt). However, TOPS/W alone is not enough. It usually goes along with the peak performance in TOPS, which gives the maximum efficiency since it assumes maximum utilization and thus maximum amortization of overhead. This does not tell the complete story, because processors typically do not operate at their peak TOPS, and their efficiency degrades at lower utilization. Below is a toy illustration of that gap, followed by the six metrics and their interplay that must be considered.
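A minimal sketch of peak versus delivered TOPS/W (every number here is assumed, purely for illustration):

```python
peak_tops   = 100    # datasheet peak: assumes every MAC is busy every cycle
power_watts = 75
utilization = 0.25   # what a real DNN layer might actually achieve

effective_tops = peak_tops * utilization
print(f"peak:      {peak_tops / power_watts:.2f} TOPS/W")       # the marketing number
print(f"delivered: {effective_tops / power_watts:.2f} TOPS/W")  # what you actually get
```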

🎯1. The accuracy determines whether the system can perform the given task. To evaluate it, several benchmarks have been proposed, among them MLPerf.

⏱2. The latency and throughput determine whether it can run fast enough and in real time. Throughput is the number of inferences per second, and latency is the time between the arrival of an input sample and the generation of the result. Batching improves throughput but degrades latency. Thus, achieving low latency and high throughput simultaneously can sometimes be at odds depending on the approach, and both metrics should be reported.

🏋🏼3. The energy and power consumption primarily dictate the form factor of the device where the processing can operate. Memory reads and writes, not arithmetic, are still the main consumers of power: a 32-bit DRAM read takes about 640 pJ, while a 32-bit FP multiply takes about 0.9 pJ.

๐Ÿค‘4. The cost, which is primarily dictated by the chip area and external memory BW requirements, determines how much one would pay for the solution. Custom DNN processors have a higher design cost (after amortization) than off-the-shelf CPUs and GPUs. We consider anything beyond this, e.g., the economics of the semiconductor business, including how to price platforms, to be outside the scope of this article. Considering the hardware cost of the design is important from both an industry and a research perspective as it dictates whether a system is financially viable

🐍5. The flexibility determines the range of tasks it can support. The hardware should not rely on specific properties of the DNN model to achieve efficiency, as the properties of DNN models are diverse and evolving rapidly. For instance, a DNN processor that efficiently supports the case where the entire DNN model (i.e., all of the weights) fits on chip may perform extremely poorly when the DNN model grows larger, which is likely, given that the size of DNN models continues to increase over time.

๐Ÿšก6. The scalability determines whether the same design effort can be amortized for deployment in multiple domains (e.g., in the cloud and at the edge) and if the system can efficiently be scaled with DNN model size.

๐Ÿ‘จโ€๐Ÿ‘ฉโ€๐Ÿ‘งโ€๐Ÿ‘ฆ7. Interplay Among Different Metrics
All the metrics must be accounted for to fairly evaluate the design tradeoffs

🐭Case 1. A tiny binarized NN architecture with very low power consumption and high throughput, but unacceptable accuracy.
🐘Case 2. A full floating-point DNN chip with high throughput and moderate chip power consumption. But it is a pure arithmetic chip with MACs only, and all data reads, writes, and storage happen off-chip, so the total system power consumption will be very high (see the energy sketch below).
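Using the per-operation energies quoted above, a back-of-the-envelope check of why off-chip data movement dominates (illustrative only):

```python
E_DRAM_READ = 640e-12   # J per 32-bit DRAM access (figure quoted above)
E_FP_MULT   = 0.9e-12   # J per 32-bit floating-point multiply (figure quoted above)

print(f"one DRAM read costs as much as ~{E_DRAM_READ / E_FP_MULT:.0f} FP multiplies")
# A "pure arithmetic" chip that streams every operand from external DRAM therefore
# spends almost all of the system energy on memory traffic, not on MACs.
```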
๐Ÿ”ฎ๐Ÿง™
Five trends that will shape the future semiconductor technology landscape
Sri Samavedam, senior vice president CMOS technologies at imec

โคด๏ธTrend 1: Mooreโ€™s Law will continue, CMOS transistor density scaling will roughly continue to follow Mooreโ€™s Law for the next eight to ten years.

โคต๏ธTrend 2: ... but logic performance improvement at fixed power will slow down
Node-to-node performance improvements at fixed power โ€“ referred to as Dennard scaling โ€“ have slowed down due to the inability to scale supply voltage. Researchers worldwide are looking for ways to compensate for this slow-down and further improve the chipโ€™s performance

๐ŸŽŽTrend 3: More heterogeneous integration, enabled by 3D technologies
We see more and more examples of systems being built through heterogeneous integration leveraging 2.5D or 3D connectivity, like SoC FPGA, HBM, CPU and GPU on the same interposer.

๐Ÿ—œTrend 4: NAND and DRAM being pushed to their limits
Emerging non-volatile memories are on the rise. The emerging non-volatile memory market is expected to grow at a compound annual growth rate of more than 50%, mainly driven by the demand for embedded magnetic random access memory (MRAM) and standalone phase change memory (PCM).

โŒš๏ธTrend 5: Spectacular rise of the edge AI chip industry
With an expected growth of above 100% in the next five years, edge AI is one of the biggest trends in the chip industry. As opposed to cloud-based AI, inference functions are embedded locally on the Internet of Things (IoT) endpoints that reside at the edge of the network, such as cell phones and smart speakers

Read the full text
๐ŸŽ‰๐ŸฅณDataFest2020 will take place this weekend ๐Ÿฅณ๐ŸŽ‰

The OpenDataScience community presents DataFest2020, which will take place this coming weekend, the 19th and 20th of September.
This year's festival will be fully online.
Please check the festival landing page, where you will find a full list of fascinating sections.
Each section includes several tracks - videos and reading materials - and interactive chat rooms for discussions and Q&A.

The @PDP11ML channel and our good friends will present the Domain Specific Hardware section.
We will talk about market trends and technical details, share success stories, and highlight SOTA solutions.
The hardware section content and time schedule will be presented here

Participation is free of charge, all tracks in our sections are in English and you are very welcome to join us.

See you soon!
Why Nvidia wants ARM, by WSJ

🤓 Huang's Law: silicon chips that power artificial intelligence more than double in performance every two years.

๐Ÿ“ŸAI Goes from the cloud to the edge (dishwashers, smartphones, watches, hoovers)

๐Ÿ—œ ARM develops ultra-low-power CPUs and ML cores

๐Ÿ’กThis movement of AI processing from the cloud to the โ€œedgeโ€โ€”that is, on the devices themselvesโ€”explains Nvidiaโ€™s desire to buy Arm, says Nexar co-founder and CEO Eran Shir.

🇨🇳The pace of improvement in AI-specific hardware will make possible a range of applications, both utopian and dystopian.

๐Ÿ“ฒUses of mobile AI are multiplying, in phones and smart devices ranging from dishwashers to door locks to lightbulbs, as well as the millions of sensors making their way to cities, factories and industrial facilities. And chip designer Arm Holdingsโ€”whose patents Apple, among many tech companies large and small, licenses for its iPhone chipsโ€”is at the center of this revolution.

๐ŸœOver the last three to five years, machine-learning networks have been increasing by orders of magnitude in efficiency, says Dennis Laudick, vice president of marketing in Armโ€™s machine-learning group. โ€œNow itโ€™s more about making things work in a smaller and smaller environment,โ€ he adds.

Source: WSJ
Intel: IoT-Enhanced Processors Increase Performance, AI, Security

๐ŸŒถMotivation: By 2023, up to 70% of all enterprises will process data at the edge.

🏗AI-inference algorithms can run on up to 96 graphics execution units (INT8) or on the CPU with built-in vector neural network instructions (VNNI). With Intel® Time Coordinated Computing (Intel® TCC Technology) and time-sensitive networking (TSN) technologies, 11th Gen processors meet real-time computing demands.

🏋🏽‍♂️Software tools: Edge Software Hub's Edge Insights for Industrial and the Intel® Distribution of OpenVINO™ toolkit (a minimal usage sketch follows at the end of this post)

๐Ÿฆ€Use cases:
Industrial, Retail, Healthcare, Smart City, Transportation
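For reference, a minimal sketch of running inference with OpenVINO on such a processor. The function names follow the OpenVINO 2.x Python API as I recall it, and the model file, input shape and device choice are placeholders, so treat the specifics as assumptions:

```python
import numpy as np
from openvino.runtime import Core    # OpenVINO 2.x Python API (assumed)

core = Core()
model = core.read_model("model.xml")                       # placeholder IR model
compiled = core.compile_model(model, device_name="CPU")    # "GPU" targets the integrated graphics EUs
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape
result = compiled([dummy])[compiled.output(0)]
print(result.shape)
```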
NVIDIA DPU
by
Forbes

โšก๏ธDomain specific processors (accelerators) are playing greater roles in off-loading CPUs and improving performance of computing systems

🤖The NVIDIA BlueField-2 DPU (Data Processing Unit), a new domain-specific computing technology, is a SmartNIC enabled by the company's Data-Center-Infrastructure-on-a-Chip software (DOCA SDK). Off-loading processing to a DPU can result in overall cost savings and improved performance for data centers.

👬 NVIDIA's current DPU lineup includes two PCIe cards, the BlueField-2 and BlueField-2X DPUs. The BlueField-2 is based on the ConnectX®-6 Dx SmartNIC combined with powerful Arm cores. The BlueField-2X includes all the key features of a BlueField-2 DPU, enhanced with an NVIDIA Ampere GPU's AI capabilities that can be applied to data center security, networking and storage tasks.

Read more about DPUs:
- Product page
- Mellanox product brief
- Servethehome
- Nextplatform
- Nextplatform