Raymondkoehler
Hi, I'm Raymond from the Netherlands, and I'm currently programming in VHDL.
I think I don't quite understand what happens when I subtract two integer values.
What I have is two ADCs and one DAC.
One ADC outputs a 14-bit 2's complement code.
The other ADC outputs an 8-bit offset binary code.
The DAC takes a 14-bit offset binary code as input.
Now I want to subtract the two ADC values and put the result on the DAC. I did the following:
I translated the 2's complement value from the 14-bit ADC into an offset binary value by inverting the MSB (this is possible and correct), and then converted it into an integer.
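Roughly, that step looks like the sketch below. This is not my exact code; the signal names are just for illustration, and I'm using the conversions from ieee.numeric_std:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- signals (declared in the architecture):
signal adc14_raw    : std_logic_vector(13 downto 0);  -- 2's complement straight from the ADC
signal adc14_offset : std_logic_vector(13 downto 0);  -- same value as offset binary
signal adc14_int    : integer range 0 to 16383;

-- concurrent assignments:
adc14_offset <= (not adc14_raw(13)) & adc14_raw(12 downto 0);  -- invert only the MSB
adc14_int    <= to_integer(unsigned(adc14_offset));            -- offset binary reads as plain unsigned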
I translated the offset binary value from the 8-bit ADC into a 14-bit value by converting it into an integer and multiplying that by 64. Then I subtract the two signals (as integers) and convert the result back into a std_logic_vector to put it on the DAC.
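The second step looks roughly like this (again, illustrative names only, continuing with adc14_int from the sketch above):

-- signals (declared in the architecture):
signal adc8_raw : std_logic_vector(7 downto 0);   -- offset binary straight from the ADC
signal adc8_int : integer range 0 to 16320;       -- scaled up to the 14-bit range
signal diff_int : integer;
signal dac_out  : std_logic_vector(13 downto 0);  -- offset binary to the DAC

-- concurrent assignments:
adc8_int <= to_integer(unsigned(adc8_raw)) * 64;          -- 8-bit value * 64 = 14-bit range
diff_int <= adc14_int - adc8_int;                         -- this can go negative
dac_out  <= std_logic_vector(to_unsigned(diff_int, 14));  -- back to a vector for the DAC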
Complete rubbish is coming out of the DAC. It really seems like there is suddenly an offset in the DAC output when I pass certain values.
It is not the fault of the conversion functions, because I tested them independently (without the subtraction) and then everything works perfectly. So it only goes wrong when I use the subtraction.
I suspect the sign bit is the problem here, or something with an underflow.
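For example (just my own reasoning, I may be off here): if the 14-bit ADC reads 0 in offset binary and the 8-bit ADC reads its maximum, 255 * 64 = 16320, then the difference is 0 - 16320 = -16320, and I don't know what value ends up on the DAC when that negative integer is converted back into a 14-bit std_logic_vector.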
Can somebody point me in the right direction? I'm running out of ideas.
Thanks in advance,
Raymond