


Re: AI for FPGA design

From Niocláisín Cóilín de Ghlostéir <Spamassassin@irrt.De>
Newsgroups sci.electronics.design, comp.arch.fpga
Subject Re: AI for FPGA design
Date 2025-08-11 12:29 +0200
Organization A noiseless patient Spider
Message-ID <9a17f136-ba5a-4585-4fe9-aae1ae614be7@irrt.De>
References <64le9k1vou92tug582k53qhfijm118r68k@4ax.com> <ff700ae7-08a7-bf40-f29a-69c44bd31ae7@irrt.De>





I wrote yesterday:
"A lady claims via LinkedIn that an AI service produced bad Verilog code, 
so she concluded that an AI is not going to threaten her job, and I wrote 
to her that she deserves a refund."


Dear all:

"User Agreement
Effective on November 20, 2024
[. . .]
8.2. Don’ts
You agree that you will not:
[. . .]
4. Copy, use, display or distribute any information (including content) 
obtained from the Services, whether directly or through third parties 
(such as search tools or data aggregators or brokers), without the 
consent of the content owner (such as LinkedIn for content it owns);"
says
HTTPS://WWW.LinkedIn.com/legal/user-agreement#dos

I asked Ms. Sharada Yeluri for permission to republish from that LinkedIn 
thread. She likes this question, so I republish . . .

"Sharada Yeluri
Engineering Leader
6 months ago [circa February 2025] • Edited • Visible to anyone, on and 
off LinkedIn

ChatGPT o1 with advanced reasoning… excels at competition-level math, 
solves PhD-level science questions, tackles complex multi-step problems 
with chain-of-thought reasoning… The list goes on.

Curious about its prowess, I decided to test its ability to develop 
Verilog RTL code for a functional block that’s commonly found in most 
networking and XPU ASICs: a buffer manager. After all, they charge $200 
per month, so there must be some magic.

Grudgingly, I paid the fee and posed a challenge: Build a buffer manager 
for a 16K entry-deep buffer that is 128 bits wide, shared dynamically 
between 256 queues. The module should sustain one enqueue and one dequeue 
every cycle without stalls... Use SRAMs for linked list structures, and 
yes, the SRAMs have two-cycle read latencies...

I know there aren’t many open-source Verilog designs for #ChatGPT 
to learn from. Still, with its "advanced" reasoning abilities, I expected 
a decent output.

It churned out an RTL module and a Verilog test bench—points for effort. 
When I pointed out how the design could not handle back-to-back dequeues 
from the same queue, it gave up too quickly and declared there was no way 
to design it without stalling the inputs. I nudged it towards approaches 
like doubly linked lists or ping-pong buffers. It understood the concepts 
and even explained them back to me, like a student trying to impress a 
professor... 😊

When the RTL didn’t give the correct results, I directly fed back the 
simulation results from its test bench for it to analyze. After a few 
feedback iterations, the enqueues started working—progress!

The dequeues, however, remained stubbornly broken. Hoping to simplify 
things, I relaxed the constraints, allowing a 5-cycle gap. No luck... 
Instead, ChatGPT decided the simulator was wrong—an audacious claim for 
an AI model still learning to count pipeline stages.

Eventually, I debugged the RTL myself and found the culprit - a typo. 
After fixing it, the dequeues worked. However, the design still lacked 
hazard checks for back-to-back dequeues, and after an hour of trying to 
teach pipeline bypasses, I called it quits.

The good news? 🤔

While the ChatGPTs and copilots might take over software engineer jobs, they 
are far from snatching jobs from ASIC engineers… 😊

They may argue about the lack of open-source Verilog for AI models to 
train on - chip designs are locked away tighter than bank vaults. But if 
ChatGPT can solve Olympiad math through reasoning, why does reasoning 
through pipeline hazards feel like rocket science to it? 🤔

The pace of innovation needed to achieve #AGI is directly tied to 
advancements in XPUs and the networking hardware they rely on. If AI 
companies are serious about accelerating AGI development, we need models 
that can reason through complex chip design problems and help compress 
design cycles. After all, these chips are the foundation for their AGI 
dreams.

#OpenAI team, now that the Olympiad math is behind you, how about 
the chip design challenge next?"
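
The data structure Yeluri's challenge calls for — a shared buffer carved 
into per-queue linked lists through a next-pointer SRAM, plus a free list 
for allocation — can be sketched as a behavioral model. Below is a 
minimal Python sketch of the pointer bookkeeping only; the names and 
structure are my assumptions, not her design, and it deliberately ignores 
the two-cycle SRAM read latency that makes the real RTL hard.

```python
# Behavioral model (Python, not RTL) of the shared buffer manager in the
# challenge: 16K entries shared by 256 queues, per-queue linked lists
# threaded through a "next pointer" memory, and a free list for
# allocation. Timing is not modeled -- only the pointer bookkeeping.

DEPTH = 16 * 1024      # 16K shared buffer entries
NUM_QUEUES = 256

class BufferManager:
    def __init__(self):
        self.next_ptr = [0] * DEPTH          # models the linked-list SRAM
        self.data = [None] * DEPTH           # models the 128-bit data SRAM
        self.head = [None] * NUM_QUEUES      # per-queue head pointer
        self.tail = [None] * NUM_QUEUES      # per-queue tail pointer
        # All entries start on the free list, threaded through next_ptr.
        self.free_head = 0
        for i in range(DEPTH - 1):
            self.next_ptr[i] = i + 1
        self.free_count = DEPTH

    def enqueue(self, q, value):
        if self.free_count == 0:
            return False                     # shared buffer is full
        cell = self.free_head                # pop a cell off the free list
        self.free_head = self.next_ptr[cell]
        self.free_count -= 1
        self.data[cell] = value
        if self.tail[q] is None:             # queue was empty
            self.head[q] = cell
        else:                                # link behind the current tail
            self.next_ptr[self.tail[q]] = cell
        self.tail[q] = cell
        return True

    def dequeue(self, q):
        cell = self.head[q]
        if cell is None:
            return None                      # queue is empty
        value = self.data[cell]
        if cell == self.tail[q]:             # queue becomes empty
            self.head[q] = self.tail[q] = None
        else:
            self.head[q] = self.next_ptr[cell]
        self.next_ptr[cell] = self.free_head # push cell back on free list
        self.free_head = cell
        self.free_count += 1
        return value
```

The hard part in RTL is exactly what this model hides: head, tail, and 
next_ptr live in SRAMs with multi-cycle reads, so back-to-back operations 
on the same queue race against in-flight pointer updates.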

"Sharada Yeluri

Author [original poster]
Engineering Leader
6 months ago

Andreas Olofsson, 100% agree. But again, the idea behind reasoning models 
is that they work well even in the absence of tons of data during 
training. The model seems to understand all the Verilog syntax and can 
spit out hundreds of lines of code that compiles well. When I explain 
pipelining concepts, it understands and repeats back its interpretation 
with examples. It almost felt like I was talking to a new college grad. 
But, it fell short of actually implementing the concepts back in Verilog. 
It probably needs fine-tuning during the training phase with examples 
where the feedback from the simulation can be used to train the models. 
Just thinking out loud."

"Sharada Yeluri

Author
Engineering Leader
6 months ago

Gaurav Vaid, hmm... interesting thoughts."

"Sharada Yeluri

Author
Engineering Leader
6 months ago

Rob Sprinkle, I haven't used Haskell personally. So, I won't be able to 
comment on it. I think the quality of the code improved a lot from the 
first pass to when I finally jumped in. It actually does learn when you 
teach new concepts. For example, when I told it that the pipeline names 
were all messed up and it should use strict suffixes like _p0, _p1, etc. 
to distinguish between the pipeline stage signals, it rewrote the code so 
well that it eventually made it easy for me to debug. If we have to 
intervene from the beginning, it defeats the purpose, IMO."

"Sharada Yeluri

Author
Engineering Leader
6 months ago

Ivan Djordjevic, reasoning models, as claimed by OpenAI, are supposed to 
be more intelligent than parrots :)"

"Sharada Yeluri

Author
Engineering Leader
6 months ago

From OpenAI: "The models use a sophisticated chain-of-thought reasoning 
process, allowing them to break down intricate problems into manageable 
steps." My experiment aims to see if the model can solve the problem on 
its own. Even then, I broke it down step by step, simplified the problem 
several times, asked it to reset and start over, etc., but I just could 
not get it to solve pipeline hazards. If you have better luck, do let me 
know."
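
The hazard she could not get the model past can be shown with a toy 
simulation: a pointer memory with a two-cycle read, and two dequeues from 
the same queue issued on consecutive cycles. Everything here is my own 
illustrative construction, not her RTL. Without a bypass the second 
dequeue samples the stale committed head pointer and serves the same cell 
twice; with a bypass from the in-flight read it advances correctly.

```python
# Toy model of a back-to-back dequeue hazard under a two-cycle pointer
# read. Cell numbers and names are illustrative assumptions.

def run(bypass):
    next_ptr = {10: 11, 11: 12}   # per-queue linked list: cell 10 -> 11 -> 12
    head = 10                     # committed head pointer for the queue
    pipe = [None, None]           # two-cycle pointer-read pipeline
    served = []                   # cells actually dequeued, in order
    for cycle in range(4):
        # A read launched two cycles ago completes: commit the new head
        # and hand the finished cell to the requester.
        done = pipe[1]
        if done is not None:
            head = next_ptr[done]
            served.append(done)
        pipe[1], pipe[0] = pipe[0], None   # advance the pipeline
        # Issue dequeues back to back on cycles 0 and 1.
        if cycle in (0, 1):
            if bypass and pipe[1] is not None:
                issue = next_ptr[pipe[1]]  # forward the in-flight update
            else:
                issue = head               # stale without the bypass
            pipe[0] = issue
    return served
```

Without the bypass, run(False) serves cell 10 twice; with it, run(True) 
serves 10 then 11. This one-cycle forwarding path is the "pipeline 
bypass" she spent an hour trying to teach.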

"Sharada Yeluri

Author
Engineering Leader
6 months ago

Varun Uniyal, I don’t think ChipNeMo can solve this. But I could be 
wrong. Try it out…"

"Sharada Yeluri

Author
Engineering Leader
6 months ago

Rajesh Parikh, Great thoughts. Maybe, in addition to more data during 
training, these domain-specific models also need access to Verilog 
simulators and test benches written by either humans or other models 
during training, as well as inference."

"Paul Colin Gloster
  • You [i.e. the poster himself]
Researcher at Universidade de Coimbra
6 months ago

Dear Sharada Yeluri: Happy New Year! Demand a refund!" She finds this 
comment to be funny. I seriously mean it.

"Sharada Yeluri

Author
Engineering Leader
6 months ago (edited)

Debajyoti Pal, I understand your concerns. But think about it this way: 
around 35-40 years ago, people used to hand-draw the schematics of logic 
gates for their chips. There was no concept of using an EDA tool to 
synthesize the gates. When the EDA tools came out, a lot of design 
engineers protested that the tools did not know how to come up with an 
area-efficient netlist that also meets timing and argued that we should 
still rely on hand-drawn logic gates for high-speed datapath. I remember 
at Sun Microsystems, we used to have special teams to do datapath design 
where the logic to gates was done manually, and engineers did manual P&R. 
Gradually, as the tools became better at what they do, we started 
trusting them for all digital logic design. The reason the tools got more 
advanced is that EDA vendors built feedback systems where the timing is 
fed back to make the synthesis and P&R better. LEC and formal methods 
ensured that the netlist is functionally equivalent to RTL, etc. I see 
the same transition that will eventually happen again, with tools 
generating RTL from high-level specs using advanced reasoning and humans 
using other verifier tools to ensure the generated RTL can be used. It is 
a matter of when, not if, IMO."

"Sharada Yeluri

Author
Engineering Leader
6 months ago (edited)

Dawei Wang, Nice to hear from you. I did what you suggested to some 
extent. The Verilog-based TB and RTL are both generated by ChatGPT, and I 
was feeding back the result from the TB as is to ChatGPT so that it can 
fine-tune the RTL until it works."

"Sharada Yeluri

Author
Engineering Leader
6 months ago

Raja Ramkaran Reddy Rudravaram, Thank you. Yes, I will try the RAG one 
next."

"Srinivas Lingam, thanks. I agree that model developers haven't yet 
focused on enhancing the model's capabilities for chip design challenges. 
Most are probably giving up too soon with the excuse that they can't find 
enough data. Even with limited data, can the model developers use the 
same RL techniques they have used for math to improve the models for RTL 
coding? Could they use Verilog simulators as verifiers during 
post-training fine-tuning? This, combined with agentic workflows (where 
the generated RTL is continuously checked with simulation by the agents 
and fed back to the model until it converges), could probably yield good 
results. I hope to see more innovation on this front."
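
The agentic workflow described in that comment — generate RTL, simulate 
it, feed the failure log back until it converges — reduces to a simple 
control loop. Here is a sketch in Python with trivial stand-ins for the 
model and the simulator (both are assumptions; a real setup would call an 
LLM API and a Verilog simulator such as Icarus or Verilator):

```python
# Sketch of the generate / simulate / feed-back loop. The "model" and
# "simulator" below are toy stand-ins to show the control flow only.

def refine(generate, simulate, max_iters=10):
    """generate(feedback) -> design; simulate(design) -> (ok, log)."""
    feedback = None
    for i in range(max_iters):
        design = generate(feedback)        # ask the model for (revised) RTL
        ok, log = simulate(design)         # run the testbench
        if ok:
            return design, i + 1           # converged: design passes
        feedback = log                     # feed the failure log back
    return None, max_iters                 # did not converge within budget

# Toy "model": starts out buggy, fixes the bug once told about it.
def toy_generate(feedback):
    return "fixed" if feedback == "off-by-one in dequeue pipeline" else "buggy"

# Toy "simulator": passes only the fixed design, else reports the bug.
def toy_simulate(design):
    if design == "fixed":
        return True, ""
    return False, "off-by-one in dequeue pipeline"
```

The iteration budget matters: as her experiment shows, a model that 
cannot reason its way to the fix just burns the budget repeating itself.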

"Sharada Yeluri

Author
Engineering Leader
6 months ago

Saurabh Chakraborty , did you try this challenge?"

"Paul Colin Gloster
  • You
Researcher at Universidade de Coimbra
1 second ago

Dear Mister Patrick Lehmann: Demand a refund! Limited LinkedIn does not 
allow having more than one reaction icon set. I set that comment by you 
to insightful, and I also want to set it to funny!"

"Sharada Yeluri

Author
Engineering Leader
6 months ago

Sreenivas Nandam, it is not a syntax error. It has one extra pipeline 
stage for the valid bit, not for the address, so it could never correctly line up 
the read data with the requester. For some reason, it was not able to 
figure that out."
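
The bug she describes — the valid flag piped one stage deeper than the 
address — is easy to reproduce in a few lines. In this toy model (my own 
construction, not her RTL), matched pipeline depths return each request's 
data in order; one extra stage on the valid path loses the first result 
and emits garbage at the end:

```python
# Two delay lines modeling the valid and address pipelines in front of a
# memory read. When their depths differ, the data presented when "valid"
# pops out belongs to the wrong request.

def read_pipeline(valid_depth, addr_depth, requests):
    mem = {0: "A", 1: "B", 2: "C"}          # memory being read
    valid_pipe = [False] * valid_depth      # delay line for the valid flag
    addr_pipe = [None] * addr_depth         # delay line for the address
    out = []
    # Drive the requests, then idle cycles to drain both pipelines.
    for addr in requests + [None] * max(valid_depth, addr_depth):
        v_out = valid_pipe[-1]              # valid popping out this cycle
        a_out = addr_pipe[-1]               # address popping out this cycle
        valid_pipe = [addr is not None] + valid_pipe[:-1]
        addr_pipe = [addr] + addr_pipe[:-1]
        if v_out:
            out.append(mem.get(a_out))      # data handed to the requester
    return out
```

With equal depths the requester sees ["A", "B", "C"]; with the valid path 
one stage deeper it sees ["B", "C", None] — exactly the "could never 
correctly line up the read data with the requester" failure.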


