===== Learning Objectives: =====
  
By the end of the course, the participants will be able to:
  * Train a neural network using Keras and TensorFlow
  * Convert a trained neural network into FPGA firmware using HLS4ML
  * Optimize a neural network and its resource utilization for deployment onto an FPGA
  * Deploy a neural network onto an FPGA board
-   • Introduction to Machine Learning on FPGAs +  Introduction to Machine Learning on FPGAs 
-       • Basic overview of FPGAs and their underlying structure. Rationale, Motivation, and trade-offs of using FPGAs for Machine Learning. +       Basic overview of FPGAs and their underlying structure. Rationale, Motivation, and trade-offs of using FPGAs for Machine Learning. 
-       • Overview of common Neural Network acceleration techniques and hardware, including GPU Acceleration, Systolic Arrays, and Dataflow Architectures. +       Overview of common Neural Network acceleration techniques and hardware, including GPU Acceleration, Systolic Arrays, and Dataflow Architectures. 
-       • Considerations and parameters to tune when implementing a neural network on a FPGA. Including parallelism and the trade-off between latency and resource utilization, arbitrary bitwidth numerical representations, and potential resource bottlenecks. +       Considerations and parameters to tune when implementing a neural network on a FPGA. Including parallelism and the trade-off between latency and resource utilization, arbitrary bitwidth numerical representations, and potential resource bottlenecks. 
  * Using HLS4ML to convert a neural network into FPGA firmware
    * Introduction to using the HLS4ML package, basic configuration, and neural-network-to-firmware conversion. A hands-on walk-through of the model conversion, firmware synthesis, and bitfile generation workflow for a simple physics task (a minimal conversion sketch follows this list).
    * Tuning the details of the implemented model, such as parallelism and precision, performing Post-Training Quantization, and determining the desired implementation strategy.
    * Advanced configuration of implementation parallelism, parameter precision, and implementation strategy. Overview of how these values can be set at different configuration scopes.
    * Simulation, profiling, and evaluation of a model before firmware generation.
-   • Optimizing your neural network for deployment onto an FPGA +   Optimizing your neural network for deployment onto an FPGA 
-       • Overview of common model compression techniques, including Quantization Aware Training (QAT), Parameter Pruning, and Knowledge Distillation. +       Overview of common model compression techniques, including Quantization Aware Training (QAT), Parameter Pruning, and Knowledge Distillation. 
-       • A survey of commonly used Quantizaton Aware Training tool kits, their differences, and when/how to use them. Plus, an example of performing QAT on a model, and how to configure and convert a quantized model using hls4ml +       A survey of commonly used Quantizaton Aware Training tool kits, their differences, and when/how to use them. Plus, an example of performing QAT on a model, and how to configure and convert a quantized model using hls4ml 
-       • Example and walkthrough of model pruning, and how to configure and convert a pruned model with hls4ml. Also an example and discussion of how to combine quantization and pruning, its effects on a model, and an example of converting a quantized and pruned model with hls4ml. +       Example and walkthrough of model pruning, and how to configure and convert a pruned model with hls4ml. Also an example and discussion of how to combine quantization and pruning, its effects on a model, and an example of converting a quantized and pruned model with hls4ml. 
  * Deployment and the PYNQ software stack
    * Overview of Xilinx’s “PYNQ” Python API and OS image, and basic usage of PYNQ to interact with, manage, and configure supported devices, such as Xilinx’s “ZYNQ/ZYNQ ULTRASCALE+” and ALVEO devices, through a Python and Jupyter Notebook interface.
    * A discussion and overview of developing for and supporting the PYNQ API when building a firmware image, its design requirements and considerations, and examples of more complex firmware images with a neural network built into them.
    * Deployment of an hls4ml-generated firmware image onto a TUL Pynq-Z2 development board, running neural network inferences on the FPGA accelerator via the “PYNQ” API and OS, and an example of running the same project on an “ALVEO” device (a minimal PYNQ inference sketch follows this list).
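
The training and conversion steps above boil down to a short Python workflow. Below is a minimal sketch, assuming a toy random dataset in place of the physics task, a small fully connected Keras model, and an illustrative FPGA part number and output directory; none of these values are prescribed by the course.

<code python>
# Minimal sketch: train a small Keras classifier, then convert it to an HLS
# project with hls4ml. Dataset, layer sizes, part number, and output directory
# are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
import hls4ml

# Toy data standing in for the "simple physics task" dataset
X = np.random.rand(1000, 16).astype(np.float32)
y = tf.keras.utils.to_categorical(np.random.randint(0, 5, 1000), 5)

inputs = Input(shape=(16,))
x = Dense(32, activation='relu')(inputs)
x = Dense(16, activation='relu')(x)
outputs = Dense(5, activation='softmax')(x)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Per-layer hls4ml configuration: precision and reuse factor control the
# resource/latency trade-off discussed in the outline.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['Model']['Precision'] = 'ap_fixed<16,6>'
config['Model']['ReuseFactor'] = 1  # fully parallel; larger values save resources

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='my_hls_project',   # assumed output location
    part='xc7z020clg400-1',        # assumed part (a Pynq-Z2 class device)
)
hls_model.compile()                 # C simulation of the HLS model
print(hls_model.predict(X[:5]))     # compare against model.predict(X[:5])
# hls_model.build(csim=False)       # runs HLS synthesis and bitfile steps (slow)
</code>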
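
For Quantization Aware Training, the sketch below assumes QKeras as the toolkit (one commonly used option; the outline above does not fix a choice). It shows the general pattern: train with quantized layers, then let hls4ml derive per-layer precision from the quantizers. Bit widths, layer sizes, and the output directory are assumptions.

<code python>
# Minimal sketch: QAT with QKeras, then conversion with hls4ml.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Activation
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu
import hls4ml

X = np.random.rand(1000, 16).astype(np.float32)
y = tf.keras.utils.to_categorical(np.random.randint(0, 5, 1000), 5)

# 6-bit weights and activations are an illustrative choice, not a course value.
inputs = Input(shape=(16,))
x = QDense(32, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1))(inputs)
x = QActivation(quantized_relu(6))(x)
x = QDense(5, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1))(x)
outputs = Activation('softmax')(x)

qmodel = Model(inputs, outputs)
qmodel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
qmodel.fit(X, y, epochs=5, batch_size=64, verbose=0)

# hls4ml reads the QKeras quantizers when building the configuration.
config = hls4ml.utils.config_from_keras_model(qmodel, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    qmodel, hls_config=config, output_dir='my_qat_project')  # assumed directory
hls_model.compile()
</code>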
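
For parameter pruning, the sketch below assumes the TensorFlow Model Optimization toolkit's magnitude-pruning wrappers; the 75% target sparsity, the schedule, and the example model are illustrative. After the wrappers are stripped, the pruned model is converted with hls4ml in the same way as an unpruned one.

<code python>
# Minimal sketch: magnitude-based pruning with tensorflow_model_optimization,
# followed by conversion of the pruned model with hls4ml.
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
import hls4ml

X = np.random.rand(1000, 16).astype(np.float32)
y = tf.keras.utils.to_categorical(np.random.randint(0, 5, 1000), 5)

inputs = Input(shape=(16,))
x = Dense(32, activation='relu')(inputs)
outputs = Dense(5, activation='softmax')(x)
model = Model(inputs, outputs)

# Wrap the model so low-magnitude weights are zeroed out during training.
pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(
        0.75, begin_step=0, frequency=100)  # assumed target sparsity/schedule
}
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
pruned.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
pruned.fit(X, y, epochs=5, batch_size=64, verbose=0,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Remove the pruning wrappers before handing the model to hls4ml; the zeroed
# multiplications can then be optimized away in firmware.
final_model = tfmot.sparsity.keras.strip_pruning(pruned)
config = hls4ml.utils.config_from_keras_model(final_model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    final_model, hls_config=config, output_dir='my_pruned_project')  # assumed directory
hls_model.compile()
</code>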
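
For deployment, the sketch below shows a generic PYNQ inference pass: load the generated bitstream as an ''Overlay'' and stream data through an AXI DMA using allocated buffers. The bitstream filename, the ''axi_dma_0'' instance name, the I/O shapes, and the float32 data type are all assumptions that depend on how the firmware image was actually built.

<code python>
# Minimal sketch: run one inference on a PYNQ-enabled board (e.g. a Pynq-Z2)
# through an hls4ml-generated accelerator wrapped behind an AXI DMA.
import numpy as np
from pynq import Overlay, allocate

overlay = Overlay('hls4ml_nn.bit')   # assumed bitstream name; matching .hwh must sit next to it
dma = overlay.axi_dma_0              # assumed AXI DMA instance name in the block design

n_inputs, n_outputs = 16, 5          # assumed model I/O sizes
in_buf = allocate(shape=(n_inputs,), dtype=np.float32)   # dtype must match the firmware interface
out_buf = allocate(shape=(n_outputs,), dtype=np.float32)

in_buf[:] = np.random.rand(n_inputs).astype(np.float32)

# Push one sample through the accelerator and wait for the result.
dma.sendchannel.transfer(in_buf)
dma.recvchannel.transfer(out_buf)
dma.sendchannel.wait()
dma.recvchannel.wait()

print('FPGA inference result:', np.asarray(out_buf))
</code>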
  
===== Prerequisites: =====