### FPGA implementation

Posted: **Tue Oct 28, 2014 12:59 am**

Hey guys, first let me say what amazing work you have all done in accurately recreating the original OPL3. So much of the work of matching the bit accuracy of the original design has already been done by you guys. It's very impressive.

So as some of you may know, I've begun implementing the OPL3 in an FPGA. I'm using SystemVerilog for the design and Octave for analysis. I'll be targeting a Xilinx Zynq-7000 on a Digilent Zybo board--it's a fairly cheap board with a pretty nice DAC on it. The ARM cores are there for when I get around to the software end of things.

So far I've got the start of an Operator: the phase increment and accumulator, including the original log-sine and exp LUTs implemented in ROMs. The paper that Steffen wrote helped a lot. The details of all the math are a bit over my head, but it's very interesting and clever how they were able to sneak the gain in as an addition, using no multipliers. Multipliers are trivial in FPGAs now, but I suppose it was different back in the early '90s (when I was barely a teenager).
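To make sure I understood the trick before coding the RTL, I sketched it in floating point (Python here rather than my Octave scripts; all names are made up, and the real chip uses 256-entry fixed-point ROMs and a shifter instead of these functions):

```python
import math

# Toy model of the log-domain gain trick, in floating point for clarity.
# The chip stores -log2 of the sine and applies the envelope as an
# addition, so no multiplier is ever needed.

def log_sin(theta):
    # "log-sine ROM": -log2(sin(theta)), i.e. an attenuation >= 0
    return -math.log2(math.sin(theta))

def exp_lut(att):
    # "exp ROM": 2**(-att), converting back out of the log domain
    return 2.0 ** -att

def attenuated_sine(theta, env_att_log2):
    # Adding attenuations in the log domain == multiplying gains linearly
    return exp_lut(log_sin(theta) + env_att_log2)

# attenuated_sine(math.pi / 2, 1.0) -> 0.5, i.e. sin(pi/2) * 2**-1
```

With fixed-point tables the additions stay additions; only the final `2**-att` needs the exp ROM plus a barrel shift for the integer part of `att`.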

So the output of my Operator looks pretty good so far, I think. Increasing env decreases the gain, as expected. Interestingly, once env gets over 300 the output becomes so tiny it's almost unusable. What I've noticed, however, are fairly prominent glitches in my output: occasional, but regular and periodic.

At first I thought it was due to errors in the 1's complement math; I changed to 2's complement as a test and the errors became less frequent, but they're still apparent. Perhaps they're present in the original design as well--is this something you guys have noticed? It could very well be that I'm introducing errors somewhere. I'm using a 20-bit accumulator, and here's the relevant code (I know it's a language you might not be familiar with, but you can probably get the gist):



```systemverilog
// Quarter-wave symmetry: bit 18 mirrors the index in the 2nd/4th quadrants
opl3_log_sine_lut log_sine_lut_inst (
    .theta(phase_acc[18] ? ~phase_acc[17:10] : phase_acc[17:10]),
    .out(log_sin_out),
    .*
);

// Envelope applied as an addition in the log domain
always_ff @(posedge clk)
    log_sin_plus_gain <= log_sin_out + (env << 3);

// exp LUT converts the summed attenuation back to linear
opl3_exp_lut exp_lut_inst (
    .in(~log_sin_plus_gain[7:0]),
    .out(exp_out),
    .*
);

always_ff @(posedge clk)
    tmp_out0 <= (2**10 + exp_out) << 1;

// Bit 19 selects the negative half of the sine period (1's complement)
always_ff @(posedge clk)
    if (phase_acc[19])
        out <= ~(tmp_out0 >> log_sin_plus_gain[LOG_SIN_PLUS_GAIN_WIDTH-1:8]);
    else
        out <= tmp_out0 >> log_sin_plus_gain[LOG_SIN_PLUS_GAIN_WIDTH-1:8];
```
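For cross-checking in software, here's a compact Python model of the same datapath. The LUT formulas are the ones from the published OPL3 reverse-engineering as I understand them; the bit widths and the 1's-complement negation mirror my snippet above, so treat it as a sketch rather than gospel:

```python
import math

# Software model of the operator datapath above, for comparing against
# the RTL simulation. LUT formulas per the OPL3 reverse-engineering docs.

LOG_SIN = [round(-math.log2(math.sin((i + 0.5) * math.pi / 512)) * 256)
           for i in range(256)]                        # quarter sine, log domain
EXP = [round((2 ** (i / 256) - 1) * 1024) for i in range(256)]

def operator_out(phase_acc, env):
    idx = (phase_acc >> 10) & 0xFF                     # phase_acc[17:10]
    if (phase_acc >> 18) & 1:
        idx ^= 0xFF                                    # ~phase_acc[17:10]
    att = LOG_SIN[idx] + (env << 3)                    # gain as an addition
    exp_out = EXP[(~att) & 0xFF]                       # ~log_sin_plus_gain[7:0]
    lin = ((1 << 10) + exp_out) << 1                   # restore the hidden 1.0
    lin >>= att >> 8                                   # integer part -> shift
    return ~lin if (phase_acc >> 19) & 1 else lin      # 1's-complement sign

# Peak of the sine with env = 0 comes out as 4084, which matches the
# commonly cited maximum operator amplitude.
```

Driving both this and the RTL with the same phase_acc/env sequences should make any off-by-one in my indexing obvious.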

I notice the glitches seem to occur right where the MSB of the log-sine LUT output toggles.

It sort of makes sense that it happens there: only one value in the log-sine LUT has the MSB set, so if the index is off by one, the value will be off by a lot, and that quantization error is then amplified through the exp stage. If it's inherent to the original design I'm fine with it; I just want to verify that it's correct. What do you guys think?
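To put numbers on that intuition, a quick Python check (table formula again per the reverse-engineering docs, so an assumption on my part) shows that only entry 0 of the log-sine table has its top bit set, and stepping one index away changes the downstream barrel-shift count by two, i.e. a factor-of-4 jump in the linear output:

```python
import math

# Only LUT entry 0 has the top bit set, and the integer part of the
# attenuation becomes the exp-stage shift count, so an off-by-one index
# near theta = 0 moves the output by whole octaves.

logsin = [round(-math.log2(math.sin((i + 0.5) * math.pi / 512)) * 256)
          for i in range(256)]

top_bit_entries = [i for i, v in enumerate(logsin) if v & 0x800]
# top_bit_entries == [0]: the single MSB-set value mentioned above

shift0 = logsin[0] >> 8   # shift count at index 0 -> 8
shift1 = logsin[1] >> 8   # shift count at index 1 -> 6, a 2-octave step
```

Since this happens right at the zero crossings, the absolute sample values are tiny, which would fit small-but-periodic glitches rather than audible ones.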