CrimsonHorn
Programmer
The project I'm working on is simple, and I have run into what I can only assume is a simple roadblock. I am trying to convert binary numbers into decimal. I have the actual structure worked out; the problem I am running into is that the largest number I can convert is around 1023 (the number 1111111111 in binary). I am not satisfied with having this limitation in my program. My approach is simple: I take the number as input from the keyboard, currently store it in an unsigned long int,
and use the tried and true % / method to evaluate and add the appropriate power of two to my decimal output variable.
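For reference, here is a minimal sketch of the kind of loop I mean (I'm assuming C here, and the names bin, dec, and place are just illustrative, not my actual code):

#include <stdio.h>

int main(void)
{
    unsigned long bin;          /* binary digits, typed in as if they were a decimal number */
    unsigned long dec = 0;      /* converted decimal value */
    unsigned long place = 1;    /* power of two for the current digit */

    printf("Enter a binary number: ");
    scanf("%lu", &bin);

    while (bin > 0) {
        dec += (bin % 10) * place;  /* low digit is 0 or 1; add its power of two */
        bin /= 10;                  /* drop the low digit */
        place *= 2;                 /* next digit up is worth twice as much */
    }

    printf("Decimal: %lu\n", dec);
    return 0;
}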
Any suggestions would be greatly appreciated
Stephen