Converting Numbers for Output and Input
with Multiplication and Division
Now that we've debugged getting an input key from the ST's keyboard and outputting its ASCII code value in hexadecimal and binary on the 68000, a natural next step would be to learn how to parse numbers from the input.
But that will require multiplying and dividing by ten, because we usually interact with numbers in decimal base -- radix base ten.
(Yeah, I'm not all that comfortable trying to remember the digits of π in
hexadecimal or binary, either. And I'm not going to go out of my way to
memorize those, particularly when I know how to get a computer to calculate
them any time I need them, as in bc, using obase to set binary
and hexadecimal output radix base:
$ bc -l
bc 1.07.1
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
obase=2
4*a(1)
11.00100100001111110110101010001000100001011010001100001000110100101\
01
obase=16
4*a(1)
3.243F6A8885A308D2A
arctangent(1) is, of course, π/4, so multiplying it by 4 gives π. And, yeah, if you're looking at the final digits above, the last digit or two that bc calculates at the scale you specify will be somewhat off. The scale above is the default of 20 decimal digits you get when starting bc with the -l option.)
When you're working in binary, getting numbers in and out in decimal requires converting between binary and decimal.
We call it decimal ("dec" is ten) because it is radix base ten, and we have become used to writing our numbers in (radix) columns using that base. Something to do, as we suppose, with the number of fingers we have.
Converting between binary (radix base two) and decimal (radix base ten) requires multiplying and dividing by two and ten.
Output, Working Left-to-Right
One approach to displaying the digits of a numeric value in decimal is to proceed from the left (most significant, in our traditional column order) to the right (least significant).
We start by finding the largest power of ten that does not exceed the number and dividing the number by that power. The quotient is the first digit on the left.
Then we repeat with the remainder and the next smaller power of ten, on down through the ones column.
The advantage of this approach is that we can start writing digits down where we are.
The disadvantage is that we have to find that largest power of ten before we can start.
One way to make that easier is to keep a pre-calculated table of powers of ten to compare the number against.
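Just to pin the algorithm down before worrying about 68000 code, here is a rough sketch of it in C; the function name, the table, and the 16-bit range are all just my choices for illustration:
#include <stdio.h>

/* Pre-calculated powers of ten, largest first, covering the 16-bit range. */
static const unsigned powers_of_ten[] = { 10000, 1000, 100, 10, 1 };
#define COLUMN_COUNT (sizeof powers_of_ten / sizeof powers_of_ten[0])

/* Print an unsigned value (up to 65535 here) in decimal, left to right. */
void print_decimal_ltr(unsigned value)
{
    size_t column = 0;
    /* Skip columns that would print as leading zeros, but keep the ones column. */
    while (column < COLUMN_COUNT - 1 && value < powers_of_ten[column])
        ++column;
    for (; column < COLUMN_COUNT; ++column)
    {
        unsigned digit = value / powers_of_ten[column]; /* quotient is this column's digit */
        value = value % powers_of_ten[column];          /* remainder carries on to the next column */
        putchar('0' + digit);
    }
}

int main(void)
{
    print_decimal_ltr(4660);  /* prints 4660 */
    putchar('\n');
    print_decimal_ltr(0);     /* prints 0 */
    putchar('\n');
    return 0;
}
Widening the value just means adding more entries to the table.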
Output, Working Right-to-Left
Another approach is to guess or calculate how much space we need, then start from the right (the least significant column) and work toward the left (the most significant). Divide the number by ten itself; the remainder is the right-most digit.
Then we repeat with the quotient until the quotient goes to zero.
The advantage of this approach is that we will always be dividing by ten. There is no need to find the largest power of ten before we can start.
The disadvantage is that we have to guess how much space to leave -- or calculate it.
But we can avoid either guessing or calculating the amount of space by doing our initial work in a temporary buffer somewhere, then copying the buffer to output.
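In the same spirit, here is a rough C sketch of the right-to-left approach, filling a temporary buffer from its right end (again, the names and the buffer size are just my choices):
#include <stdio.h>

/* Room for the ten decimal digits of a 32-bit value plus a terminating NUL. */
static char conversion_buffer[11];

/* Convert an unsigned value to decimal, filling the buffer from its right end. */
char *decimal_rtl(unsigned value)
{
    char *p = conversion_buffer + sizeof conversion_buffer - 1;
    *p = '\0';
    do
    {
        *--p = (char)('0' + value % 10);  /* the remainder is the rightmost remaining digit */
        value /= 10;                      /* the quotient goes around for the next column */
    } while (value != 0);
    return p;  /* the first digit lands somewhere inside the buffer */
}

int main(void)
{
    printf("%s\n", decimal_rtl(4660));  /* 4660 */
    printf("%s\n", decimal_rtl(0));     /* 0 */
    return 0;
}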
Efficiency
Which is more efficient depends on a lot of things, but, in many cases, the code for the left-to-right approach can be organized so that the whole conversion costs about as much as one complete division, the iteration for each column producing one digit. By comparison, the right-to-left approach requires an actual division for every column, and each one is a division by a small number, which is the sort of division that takes the most processor cycles.
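What I mean is easier to see in a sketch. Here is the left-to-right conversion in C done with nothing but comparison and subtraction (no divide instruction at all), each column costing at most nine subtract-and-count steps; the names are mine again:
#include <stdio.h>

static const unsigned powers_of_ten[] = { 10000, 1000, 100, 10, 1 };
#define COLUMN_COUNT (sizeof powers_of_ten / sizeof powers_of_ten[0])

/* Left-to-right decimal output using only comparison and subtraction. */
void print_decimal_by_subtraction(unsigned value)
{
    size_t column;
    int started = 0;  /* suppress leading zeros until we emit a digit */
    for (column = 0; column < COLUMN_COUNT; ++column)
    {
        unsigned digit = 0;
        while (value >= powers_of_ten[column])  /* at most nine times per column */
        {
            value -= powers_of_ten[column];
            ++digit;
        }
        if (digit != 0 || started || column == COLUMN_COUNT - 1)
        {
            putchar('0' + digit);
            started = 1;
        }
    }
}

int main(void)
{
    print_decimal_by_subtraction(4660);  /* prints 4660 */
    putchar('\n');
    return 0;
}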
But if we are just trying to get output going, we may find it easier to allocate the conversion buffer as a process global variable and use the latter method.
Input, Working Left-to-Right
To input a decimal value from the keyboard, we can get each digit in order from left to right, multiplying the accumulated value by ten before adding the digit we got, repeating until there are no more digits entered (or perhaps until the accumulated value overflows).
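Here is that loop as a rough C sketch, reading from stdin rather than the ST's keyboard; the function name is mine, and there is no overflow check, just the bare idea:
#include <stdio.h>

/* Parse an unsigned decimal number from stdin, left to right.
   Stops at the first non-digit, which is pushed back for the caller. */
unsigned parse_decimal_ltr(void)
{
    unsigned value = 0;
    int c;
    while ((c = getchar()) >= '0' && c <= '9')
        value = value * 10 + (unsigned)(c - '0');  /* shift the columns left, add the new digit */
    if (c != EOF)
        ungetc(c, stdin);
    return value;
}

int main(void)
{
    printf("%u\n", parse_decimal_ltr());
    return 0;
}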
Input, Working Right-to-Left
Or we can read all the digits into a global conversion buffer first, count the number of digits, and multiply each digit by the appropriate power of ten as we add it in. And that also requires multiplication.
Multiplication, either way.
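A rough C sketch of the buffered version, working from the right end of the buffer (names mine), might look like this:
#include <stdio.h>
#include <string.h>

/* Parse an unsigned decimal number already collected into a buffer,
   working from the rightmost digit toward the left. */
unsigned parse_decimal_rtl(const char *buffer)
{
    size_t count = strlen(buffer);  /* count the digits first */
    unsigned value = 0;
    unsigned power = 1;             /* 1, 10, 100, ... one power per column */
    while (count > 0)
    {
        --count;
        value += (unsigned)(buffer[count] - '0') * power;
        power *= 10;                /* the multiplication we can't get away from */
    }
    return value;
}

int main(void)
{
    printf("%u\n", parse_decimal_rtl("4660"));  /* prints 4660 */
    return 0;
}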
Efficiency
Again, efficiency appears to be more on the side of the left-to-right approach. But, again, we may find it easier (a more efficient use of the programmer's time) to declare the buffer, copy input into the buffer until we get a non-numeric character, and parse/convert the number from the buffer instead of directly from the keyboard.
But thinking about efficiency too early in the planning stages is a mistake,
unless you are actually not thinking about efficiency so much as trying to
understand the problem.
Approaching Implementation
I don't know about you, but I find multiplication to be easier than division.
Why?
Memorizing the multiplication tables is fairly easy, and once we have the table memorized, we can look at each pair of digits from the multiplier and multiplicand and directly produce a digit of the product, with a possible carry.
It's a straightforward input-driven process.
Trying to memorize division tables means memorizing lots of possible products and the factors used to produce them, and there are so many possibilities that we don't usually get motivated to do that. (There are certain patterns we can memorize that help, though.) And then we use what we remember to look at the divisor and the dividend and guess which product of which pair of factors applies.
Even when we do that for each digit in the dividend, we often have to guess, then check our guesses, and only if a guess is correct can we reduce the dividend and count and record that digit of the quotient.
Essentially, we look at the divisor and the dividend and go searching for the quotient.
Aaaaaaannnddd ---
Checking whether we have found the correct digit of the quotient at each step
requires multiplication.
Erk. Does it feel like we're being corralled into understanding multiplication?
Let's look at multiplication.