How to become a NVIDIA-Certified Associate: Generative AI LLMs (NCA-GENL)


When Nvidia announced its Generative AI certification tracks at GTC in March 2024 — the LLM-focused NCA-GENL, as well as the multimodal NCA-GENM — it was clear to me that I wanted to give it a shot, for various reasons:

  • My Google Cloud-certified knowledge from 2020 and 2021 felt outdated (see summaries of my ML Engineer, Data Engineer and Architect experiences); those courses had covered, for example, Recurrent Neural Networks (RNNs) for language and Convolutional Neural Networks (CNNs) for vision use cases, but had not touched today’s critical Transformer architecture or Generative Adversarial Networks (GANs);
  • As much as Nvidia has been all over the news for its dramatic stock surges, I had very limited insights into their hardware and software stack, and how to best leverage their AI products versus (or together with) hyperscalers and the ISV ecosystem;
  • I am currently searching for a new job in the Generative AI space, and wanted to make sure I come across as more than an applicant who throws around buzzwords.


The NCA-GENL certification syllabus seemed more relevant to me, so I started studying mid-March and successfully passed the exam at the end of April. During my studies, I did not encounter any sample questions or blogs from people who had passed or failed the certification. The official information on Nvidia’s Deep Learning Institute (DLI) homepage was not very helpful, and support tickets with my questions remained unanswered. So I wrote this article to shed some light on my preparation, the required and tested knowledge, and how the exam is structured. All views are subjective, and all visualizations were created by myself based on the free-of-charge and public Nvidia material, so please take the correctness of anything inside this article with a grain of salt.


1) Preparation Material

Nvidia recommends a variety of introductory and deep dive courses, some of which are free-of-charge, others of which are instructor-led and rather pricey.


Of the introductory courses, I did the first but skipped the second and third, the reason being that I had done DeepLearning.AI’s Machine Learning Specialization a year earlier — an excellent course that I can highly recommend. For the deep dives, I did courses six and seven. Based on the course outlines of the costly Nvidia courses (four, five, eight and nine), I substituted them with the following free alternatives:


The following webinars are recommended by Nvidia — the first, second and third are fairly redundant, with the second being the most helpful; the fourth webinar has the best content by far and includes LangChain code that makes the concepts tangible (a minimal sketch of the same retrieve-then-generate flow follows the list below):

  1. What AI teams need to know
  2. Building Generative AI Applications for Enterprise Demands
  3. Building AI chatbots using RAG
  4. The Fast Path to Developing with LLMs
  5. Running your own LLM
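
To make the RAG idea from these webinars concrete without tying it to a specific framework, here is a minimal sketch of the retrieve-then-generate flow. The embed_texts and call_llm helpers are deliberately hypothetical placeholders — in a real setup (for example the LangChain code from the fourth webinar) you would swap them for an embedding model and an actual LLM endpoint:

# Minimal retrieval-augmented generation (RAG) flow with toy embeddings.
# embed_texts() and call_llm() are placeholders, not any library's real API.
import numpy as np

def embed_texts(texts):
    """Placeholder embedder: hashes words into a fixed-size bag-of-words vector."""
    vocab_size = 512
    vecs = np.zeros((len(texts), vocab_size))
    for i, text in enumerate(texts):
        for word in text.lower().split():
            vecs[i, hash(word) % vocab_size] += 1.0
    # L2-normalise so a dot product equals cosine similarity
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

def call_llm(prompt):
    """Placeholder LLM call: echo the prompt instead of querying a real model."""
    return "[LLM would answer based on]\n" + prompt

documents = [
    "Triton Inference Server serves models from multiple frameworks on GPUs and CPUs.",
    "TensorRT optimises trained networks for low-latency inference on NVIDIA GPUs.",
    "cuDF provides a pandas-like DataFrame API that runs on the GPU.",
]

doc_vecs = embed_texts(documents)
question = "What does TensorRT do?"
q_vec = embed_texts([question])[0]

# Retrieve the most similar document (cosine similarity via dot product)
best = int(np.argmax(doc_vecs @ q_vec))
context = documents[best]

# Augment the prompt with the retrieved context, then generate
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(call_llm(prompt))

The point is only the flow: embed the documents, embed the question, retrieve by similarity, and put the retrieved context into the prompt before calling the model.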


In addition, there are a ton of articles that summarize (and sometimes extend) the course and webinar content. I liked and can recommend the following:


I spent the lion’s share of my exam preparation time on the LLM-specific Nvidia tech stack and its related products:


  • DGX and Base Command, in simple words “the deep learning hardware stack, or IaaS” — see DGX’s Solution Overview, Datasheet and Base Command’s Solution Overview, Video and Datasheet
  • AI Enterprise, in simple words “the AI software stack, or PaaS”; since it bundles hardware acceleration with data science tooling, it is more comprehensive than what hyperscalers like Google Cloud, MS Azure and AWS offer on their clouds. It helped me to think of Nvidia AI Enterprise as a competitor to Hugging Face: Hugging Face offers purely open-source building blocks to companies of all sizes, while Nvidia’s AI Enterprise platform offers open-source building blocks, integrates with proprietary Nvidia products, and solves for complex enterprise requirements — see Product Page and Solution Overview
  • NeMo, which Nvidia groups under AI workflows, but which in simple words I thought of as “the end-to-end MLOps framework exclusively for LLMs, or SaaS” (next to e.g. Picasso for video, Metropolis for smart cities, Riva for speech, Drive for autonomous vehicles, Merlin for recommendation systems, Clara for healthcare, Morpheus for cybersecurity, Isaac for robotics and cuOpt for logistics) — see NeMo’s Product Page, Solution Overview and Video, and in particular deep dives on:


In addition, I made extensive use of Nvidia’s glossary (e.g. LLMs explained and Deep Learning) and preparation material for the AI in the Data Center certification.


My best friend in the whole knowledge acquisition process was the GenAI-native search engine Perplexity. Whenever I felt like I had (mis-)understood a concept, I ran it through Perplexity as my sparring partner. Every now and then I wanted to throw in the towel — in those moments Perplexity got me back on track. Gemini and ChatGPT were also helpful, primarily for coming up with creative analogies to Nvidia products — but I am coming out of this whole experience as a huge Perplexity fan.


2) The Exam

The exam is hosted on the Talview Secure Browser, a seamless experience with checks by a proctoring agent. I enjoyed the logistics significantly more than with Google Cloud’s Kryterion, with which I have had issues every single time. Nvidia gives you a dry run of the whole Talview process once you buy the exam voucher, so you can have peace of mind on the day of the exam. The exam onboarding was indeed quick (5 minutes maximum). The only issue was that my passport screenshot was not readable and I had to retry several times; there is an option to upload a JPG file instead, so having this file handy on your desktop is a good idea. Once you pass the logistics onboarding, you are not allowed to start before your scheduled time — in my case that meant waiting for 15 minutes.

Overall, Nvidia presents you with 50 multiple-choice questions and gives you only one hour to answer them all. Roughly five of the questions had five possible answers, of which more than one was correct. The remaining questions usually had four options to choose from, with a handful having only three.

The proctoring agent checked in on me a few times during the exam, which was unfortunate for two reasons: first, it interrupts your flow; second, it came with a doorbell sound that made my confused dog bark for no reason. I finished only 10 minutes ahead of time, whereas in the Google Cloud certifications I usually had at least a third of my time left.

So, what about the difficulty? I thought this was way, way harder than anticipated:

  • Around 10% of the questions were about general understanding of deep learning, including e.g. support vector machines (SVMs), exploratory data analysis (EDA), and activation and loss functions.
  • Another 10% aimed at the transformer architecture, particularly the topics of encoding, decoding and attention — this is definitely a must in your preparation (see the attention sketch after this list).
  • 40% were focused on how to work with models in the NLP and LLM space, and a lot of the topics caught me off guard. I remember questions on text normalization techniques such as stemming and lemmatization (see the stemming/lemmatization sketch after this list); understanding the high-level mechanics of embedding techniques (e.g. WordNet vs. word2vec); advantages of Python libraries such as spaCy; NLP evaluation frameworks such as GLUE; and interoperability standards such as ONNX.
  • Only 40% covered content that I had actively studied. Customization and RAG summed up to a combined five questions. TensorRT and Triton Inference Server came up very frequently, and there were numerous questions about optimization techniques for GPUs, CPUs and memory in the Nvidia stack. I do not recall any direct product questions on DGX, AI Enterprise and NeMo — the focus was broader, such as grasping how cuDF, cuML and the NGC catalog are used in practice.
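
Since attention was such a recurring theme, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside every transformer layer — single head, no masking and no learned projections, so it only illustrates the mechanics rather than a production implementation:

# Scaled dot-product attention for a single head, without masking or projections.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled to keep the softmax stable
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of the value vectors
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # 4 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
out, attn = scaled_dot_product_attention(X, X, X)    # self-attention: Q = K = V = X
print(attn.round(2))                                 # each row shows how much a token attends to the others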
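
And for the text-normalization questions, a small sketch contrasting stemming and lemmatization with NLTK — this assumes nltk is installed and downloads the WordNet data at runtime; spaCy exposes lemmas in a similar way through its pipeline:

# Stemming vs. lemmatization: both normalise word forms, but lemmatization
# returns dictionary words while stemming just chops suffixes heuristically.
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet", quiet=True)    # the lemmatizer needs the WordNet corpus
nltk.download("omw-1.4", quiet=True)

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word, pos in [("running", "v"), ("studies", "n"), ("mice", "n"), ("better", "a")]:
    print(f"{word:8} stem={stemmer.stem(word):8} lemma={lemmatizer.lemmatize(word, pos=pos)}")

The stemmer chops suffixes heuristically (studies becomes studi), while the lemmatizer maps words to dictionary forms via WordNet (mice becomes mouse, better becomes good) — which is also a handy way to remember the WordNet vs. word2vec distinction: WordNet is a curated lexical database, word2vec is a learned vector embedding.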


3) Final remarks

In case you are interested in my preparation notes, you can find them here. These encompass summaries of the free-of-charge and public sources mentioned above. If you have any additional questions, do not hesitate to ping me and I’ll be happy to help wherever I can.


All views are subjective, and all visualizations were created by myself based on the free-of-charge and public Nvidia material. Please take the correctness of anything inside this article with a grain of salt.
