AtomicChip
Programmer
Hey all,
Just looking for an explanation of how and why this works:
Code:
const char* foo = "123";
int n = foo[ 0 ] - '0'; // resulting in the int value 1
Now, I understand that C++ will do automatic type conversions, like in the following example:
Code:
const int n = 3;
const float m = 1.1;
const int l = n*m; // n*m is about 3.3f here, and converting float to int truncates, so l == 3
...but why does subtracting the ASCII character '0' from another ASCII character result in a proper int value?
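Here's a small, self-contained sketch of what I'm seeing (assuming an ASCII execution character set, so '0' is 48 and '1' is 49), just printing the numeric values involved:
Code:
#include <iostream>

int main()
{
    const char* foo = "123";

    std::cout << static_cast<int>(foo[ 0 ]) << '\n'; // 49, the numeric value of '1'
    std::cout << static_cast<int>('0') << '\n';      // 48, the numeric value of '0'
    std::cout << foo[ 0 ] - '0' << '\n';             // 49 - 48, prints 1 as an int

    return 0;
}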
Thanks in advance..
-----------------------------------------------
"The night sky over the planet Krikkit is the least interesting sight in the entire universe."
-Hitch Hiker's Guide To The Galaxy