
Monday, December 23, 2024

ALPP 03-XX (15) -- Numeric Output Conversion and Multiplying by Constants (Theory)

I'm leaving this here for reference, for the moment. Eventually, I intend to do a chapter focusing on shifts, and some of this will find a place there.
You probably want to go here: https://joels-programming-fun.blogspot.com/2025/01/alpp-03-15-converting-numbers-output-input-multiplication-division-theory.html.


Numeric Output Conversion
and Multiplying by Constants
(Theory)

(Title Page/Index)

 

Now that we've debugged getting an input key from the ST's keyboard and outputting its ASCII code value in hexadecimal and binary on the 68000, a natural next step would be to learn how to parse numbers from the input. 

But that will require multiplying and dividing by ten.

Why? Because we usually interact with numbers in decimal base -- radix base ten. 

(Yeah, I'm not all that comfortable trying to remember the digits of π in hexadecimal or binary, either. And I'm not going to go out of my way to memorize those, particularly when I know how to get a computer to calculate them any time I need them, as in bc, using obase to set binary and hexadecimal output radix base:

$ bc -l
bc 1.07.1
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'. 
obase=2
4*a(1)
11.00100100001111110110101010001000100001011010001100001000110100101\
01
obase=16
4*a(1)
3.243F6A8885A308D2A

arctangent(1) is, of course, π/4. Yeah, if you're looking at the final digits above, the last digit or two that bc calculates at the scale you specify will be somewhat off. The scale above is the default of 20 decimal digits when starting bc with the -l option.)

When you're working in binary, getting numbers in and out in decimal requires converting between binary and decimal, and converting between binary and radix base ten requires multiplying and dividing by ten. 

To display a value in decimal going to the right, we divide by the largest power of ten that fits, convert the quotient to the ASCII code for that digit, and repeat with the remainder and the next smaller power of ten, down through the ones place.

Or we can work going to the left, dividing by ten, converting the remainder to the ASCII code for that digit, and repeating with the quotient until the quotient reaches zero.

Division, either way.
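
(If it helps to see the shape of that outside of assembly, here's a rough sketch in C of the right-to-left flavor. The names and the buffer are mine, purely for illustration -- we'll do the real thing in assembly later.)

#include <stdio.h>

/* Sketch: print an unsigned value in decimal by repeatedly dividing
   by ten, writing each remainder as a digit, right to left. */
void print_decimal(unsigned value)
{
    char buf[11];                   /* enough for 32 bits of decimal output */
    char *p = buf + sizeof buf - 1;
    *p = '\0';                      /* NUL terminate, then build leftward */
    do {
        *--p = '0' + (value % 10);  /* the remainder is the next digit */
        value /= 10;                /* repeat with the quotient */
    } while (value != 0);
    fputs(p, stdout);
}

int main(void)
{
    print_decimal(1202);            /* prints 1202 */
    return 0;
}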

To input a decimal value from the keyboard, we get each digit in order, multiplying the accumulated value by ten before adding the digit we got, repeating until there are no more digits entered (or until the accumulated value overflows). 

Or we can read all the digits first, count the number of digits, and multiply each digit by the appropriate power of ten as we go, and that also requires multiplication.

Multiplication, either way.
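
(And the input side, sketched the same way in C -- multiply the running value by ten, then add the new digit. No overflow check here, just the shape of it.)

#include <stdio.h>

/* Sketch: parse decimal digits from a string, accumulating
   value = value * 10 + digit for each digit character. */
unsigned parse_decimal(const char *s)
{
    unsigned value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (unsigned)(*s - '0');
        s++;
    }
    return value;
}

int main(void)
{
    printf("%u\n", parse_decimal("1202"));   /* prints 1202 */
    return 0;
}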

While we can do general multiplication and division on the 68000, we haven't really talked about it. And we haven't looked at how to synthesize multiplication and division on 8-bit CPUs that don't have them. I'll show you general routines for multiplication and division pretty soon, but I want to show why they work (and give you some clues about how to speed them up) by introducing multiplying and dividing by constants, which, by more than coincidence, can be useful in decimal input and output.

But we've already been multiplying and dividing by two and sixteen, haven't we? 

Haven't we?

Let's look again at getting both binary and hexadecimal output. We need to understand what we are doing there.

When converting to binary from decimal or hexadecimal by hand, the usual approach (ignoring fractions) is 

Set the radix point (fraction/decimal point) on the right.
Do until all digits (bits) converted (until quotient is zero):
  Divide the number by 2, keeping both quotient and remainder.
  Write the remainder down as the next digit,
    going left from the radix point.
  Repeat with the quotient.
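
For example, converting 13 (base ten) this way: 13 ÷ 2 is 6 remainder 1, 6 ÷ 2 is 3 remainder 0, 3 ÷ 2 is 1 remainder 1, 1 ÷ 2 is 0 remainder 1, and the quotient is now zero, so we stop. Writing the remainders down right-to-left gives 1101.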

Now, even taking into account that this algorithmic description is rather loose, looking at what we were doing in the 6800 chapter, it looks different, doesn't it? (We were, in fact, converting to external binary from what we could call internal radix base 256. And that's not being absurd to say it that way, no.) 

I mean, even ignoring the additional step of converting the remainder to a character for output, it's different. We were going left-to-right, and not even noticing the radix point until we were done, if then.

Let's look at the 6809 code for binary output again (since I think the 6809 code is easier to read):

* Output a 0
OUT0	LDB	#'0
OUT01	PSHU	D
	LBSR	OUTC
	RTS
*
* Output a 1 
OUT1	LDB	#'1
	BRA	OUT01
* Rob code, shave a couple of bytes, waste a few cycles.
*
* Output the 8-bit binary (base two) number on the stack.
* For consistency, we are passing the byte in the low-order byte
* of a 16-bit word.
OUTB8	LDB	#8	; 8 bits
	STB	0,U	; Borrow the upper byte of the parameter.
OUTB8L	LSL	1,U	; Get the leftmost bit of the lower byte.
	BCS	OUTB81
OUTB80	BSR	OUT0
	BRA	OUTB8D
OUTB81	BSR	OUT1
OUTB8D	DEC	,U
	BNE	OUTB8L	; loop if not Zero
	LEAU	2,U	; drop parameter bytes
	RTS

In modified human English, that's going to look like

Do:
  Shift the bits left, capturing the bit carried off the top.
  Convert the captured bit to a character and
    write it down as the next digit,
    going right.
  Repeat until no bits remain to be converted.

Yep. Going the opposite direction. And the radix point just ended up where we stopped.

Completely backwards!

What's going on here?

You'll remember that I mentioned that shifting digits to the left (shifting the radix point to the right and filling with zeroes) is the same as multiplying by the radix.

You don't remember that I said that?

What did I say? Ah, here it is, in the chapter on hexadecimal output on the 6800:

... shifting is division and multiplication by powers of two. ...

a little before talking about moving the radix point in decimal numbers, which is the same as shifting decimal digits.

So, shifting bits to the left is multiplying by two. And shifting bits to the right is dividing by two.

And when we grabbed the bit that came off the high end into the carry, we were just grabbing the bits as they came off the top, right?

Here's how I want to see that. On the one hand, we were multiplying by two. On the other hand, the top bit came off into the carry, and we grabbed it. So we were shifting right by 7, grabbing the quotient (from the carry).

Which is dividing by 2^7 -- dividing by 128 (base ten).

This is because the byte is 8 bits, and the 8-bit register forms something mathematicians call a ring, which we aren't going to describe in detail because I don't want to put everyone to sleep.

But it's mathematics. We can rely on it once we understand it. Multiplication in a ring is division, and sometimes that is useful.
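
(If you want to poke at that ring behavior somewhere convenient, here's a tiny C sketch of mine -- shift an 8-bit value left, catch the bit that falls off, and you get exactly the bit a shift right by 7 would have given you.)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t x = 0xB5;               /* any 8-bit value */
    for (int i = 0; i < 8; i++) {
        uint8_t carry = x >> 7;     /* the bit about to fall off the top */
        x = (uint8_t)(x << 1);      /* multiply by 2 in the 8-bit ring */
        printf("%d", carry);        /* the carry is the quotient of dividing by 128 */
    }
    printf("\n");                   /* prints the bits of 0xB5: 10110101 */
    return 0;
}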

Now, we could do this:

NUMBUF	RMB	34	; enough for 32 bits of output
*
CNVB8	TFR	DP,A	; point to the direct page
	CLRB
	TFR	D,Y
	LEAY	NUMBUF-LOCBAS,Y	; point to NUMBUF
	LEAY	9,Y	; start at the right
	CLR	,-Y	; NUL terminate it
	LDA	#8	; 8 bits
CNVB8L	LDB	#'0'	; ASCII '0'
	LSR	1,U	; Get the lowest bit into the carry
	ADCB	#0	; convert it to ASCII
	STB	,-Y	; build the string right-to-left
CNVB8D	DECA
	BNE	CNVB8L	; loop until counted out
	STY	,U	; return the address of the buffer
	RTS		; (this ought to work, anyway)

And that would be in the order of working right-to-left, and then we could take the address that CNVB8 returns and pass it off to OUTS, and print the number as a string.

But that would require an intermediate buffer (NUMBUF above), and I want to be able to output binary and hexadecimal without intermediate buffers. (And without explicit multiply and divide instructions.)

The intermediate buffer is where we set the radix point on the right so we can chop the less significant digits off first and write them down going to the left. 

Intermediate buffers make debugging more difficult.

You can see that I would want to be able to output decimal numbers without intermediate buffers, too, right? Maybe?

Can it be done? 

(If you aren't really following me, well, stick around for the ride. It does eventually begin to make sense. I think.)

We've seen that it can be done with hexadecimal. In the 6809 code we had

ASC0	EQU	'0	; Some assemblers won't handle 'c constants well.
ASC9	EQU	'9
ASCA	EQU	'A
ASCXGAP	EQU	ASCA-ASC9-1	; Gap between '9' and 'A' for hexadecimal
*
* Mask off and convert the nybble in B to ASCII numeric,
* including hexadecimals
OUTH4	ANDB	#$0F	; mask it off
	ADDB	#ASC0	; Add the ASCII for '0'
	CMPB	#ASC9	; Greater than '9'?
	BLS	OUTH4D	; no, output as is.
	ADDB	#ASCXGAP	; Adjust it to 'A' - 'F'
OUTH4D	CLRA
	STD	,--U
	LBSR	OUTC
	RTS
*
* Output an 8-bit byte in hexadecimal,
* byte as a 16-bit parameter on PSP.
OUTHX8	LDB	1,U	; get the byte
	LSRB
	LSRB
	LSRB
	LSRB
	BSR	OUTH4
	LDB	1,U
	BSR	OUTH4
	LEAU	NATWID,U
	RTS

You can see we were capturing the high nybble (four bits) there, right? The code does it by shifting right four times.

And shifting right four is the same as shifting left four, capturing the bits as we go. Right? Correct? (Sorry about that.)

And, rather than shifting the low four back into place, we can grab the top bits from a copy, and then grab the bottom bits from the original, masking the top bits off.

Multiplying by sixteen in an 8-bit ring is dividing by sixteen in the 8-bit ring (within the conditions of the ring).
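
(In C terms, and only as an illustration of the idea rather than a translation of OUTHX8 and OUTH4, that comes out to something like this.)

#include <stdio.h>
#include <stdint.h>

/* Sketch: print a byte as two hex digits -- shift right four for the
   high nybble, mask for the low nybble, adjust each to ASCII. */
static char hex_digit(uint8_t n)    /* n is 0..15 */
{
    return (n <= 9) ? (char)('0' + n) : (char)('A' + (n - 10));
}

int main(void)
{
    uint8_t b = 0x3C;
    putchar(hex_digit(b >> 4));     /* high nybble first */
    putchar(hex_digit(b & 0x0F));   /* then the low nybble, top bits masked off */
    putchar('\n');                  /* prints 3C */
    return 0;
}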

With a little thought, we could figure out how to output the character code in radix base 4 (quaternary) or 8 (octal) by this method, as well. Any power of 2 would be just a matter of shifting the bits appropriately, capturing them, and adjusting the resulting value to a symbol that represents it.

One problem with octal base -- radix base eight -- is that 8 is 2^3, which requires 3 bits per octal digit. And, where we can fit 2 hex digits exactly in an 8-bit byte, or four quaternary digits exactly, octal digits end up with a bit left over. Two bits, I mean. (Sorry. ;)

Which means the left-most digit when converting a byte has only two bits, and can only range between 0 and 3 instead of 0 and 7. And you have to account for that as you convert.

Which means that, converting a byte from left-to-right, when you start by shifting a digit off the top, you have to shift only 2 bits for the first digit. Instead of multiplying by 8 (which is dividing by 2^5, or dividing by 32) the first time, you have to multiply by 4 (divide by 2^6, or 64) to get that first octal digit on the left.

Thinking about this, 64 is the maximum power of 8 that fits in a byte. 2^9 (512) does not fit in a byte. Keep this in mind.

And if you're converting a byte to octal from right to left, you still have to remember to only shift twice on the last shift.
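
(Here's the left-to-right byte-to-octal shape as a C sketch of mine -- two bits for the first digit, then three and three.)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t b = 0xFF;
    putchar('0' + (b >> 6));        /* top two bits: the digit can only be 0..3 */
    putchar('0' + ((b >> 3) & 7));  /* next three bits */
    putchar('0' + (b & 7));         /* last three bits */
    putchar('\n');                  /* 0xFF prints as 377 */
    return 0;
}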

How about base ten, then? Can we do something like this with base ten?

Octal cuts binary integers up three bits per octal digit. Hexadecimal cuts them up four bits per hex digit. How many bits per decimal digit? Ten isn't a power of two; it works out to about 3.32 bits per decimal digit. That's clearly not a whole number of bits, and we don't know how to shift by anything but whole numbers of bits, so it doesn't look hopeful.

As a digression (8-o), say you encode decimal in four bits per decimal digit. Could this work?

Let's see.

In hexadecimal, we can record a digit from 0 to F (base sixteen) in four bits. So, what if we decide to only record digit values 0 through 9? It's a little bit wasteful, but it's enough to encode a decimal digit in four bits.

Let's see it:

0: 0000
1: 0001
2: 0010
3: 0011
4: 0100
5: 0101
6: 0110
7: 0111
8: 1000
9: 1001

Yep, it can be encoded. 

This is binary coded decimal, or BCD.

But, 10011001 (base two) -- $99, or 99 (base sixteen) -- is 128 + 16 + 8 + 1, which is equal to 153 (base ten).

Where 10011001 (BCD) -- also $99 -- is 99 (base ten).

Eaaaoooooohhh confusion! 

It turns out you can add and subtract directly in BCD, although it takes an extra step or so to handle carries correctly. And, of course, you can multiply and divide, and the algorithms look like what you'd do by hand. And of course shifting left one BCD digit (four bits) at a time does work out to multiplying by ten, and shifting right one BCD digit at a time works out to dividing by ten.
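
(To make that concrete, here's a little C sketch of mine -- not the 6800/6809 decimal-adjust mechanism, just a digit-by-digit add with the decimal carry handled by hand, plus the shift-a-nybble multiply by ten.)

#include <stdio.h>
#include <stdint.h>

/* Sketch: add two packed-BCD bytes (two decimal digits each),
   propagating the decimal carry between digits. */
static uint8_t bcd_add(uint8_t a, uint8_t b)
{
    unsigned lo = (a & 0x0F) + (b & 0x0F);
    unsigned hi = (a >> 4) + (b >> 4);
    if (lo > 9) { lo -= 10; hi += 1; }      /* decimal carry into the high digit */
    if (hi > 9) { hi -= 10; }               /* decimal carry out, ignored here */
    return (uint8_t)((hi << 4) | lo);
}

int main(void)
{
    printf("%02X\n", bcd_add(0x25, 0x17));  /* 25 + 17 = 42, prints 42 */
    printf("%X\n", 0x42 << 4);              /* shift left one BCD digit: prints 420 */
    return 0;
}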

But it's a bit wasteful of bits. 

In BCD, you can encode numbers from 0 to 99,999,999 in four bytes (eight nybbles), or 32 bits. 

0 to 99,999,999 in binary (0 to 101111101011110000011111111) requires only 27 bits, which fits in just less than three and a half bytes (one bit short of seven nybbles).

Okay, it's not really all that wasteful. (In fact, if I understand correctly, my favorite multi-precision command-line *nix tool, bc, demonstrated again above, operates in BCD.)

But now we have issues when we want to convert BCD to binary, so that we can use numbers as addresses and such. It just shifts the problems around. (In bc, we usually aren't working with addresses, by the way.)

Yes, we'll have to talk about BCD at some point.

The purpose of the digression was to try to give you a little more space and perspective before we dig into multiplying by shifts and adds. 

This theoretical stuff is getting long. Let's look at some actual code to multiply by constant powers of 2.  

(Title Page/Index)


Wednesday, December 18, 2024

ALPP 03-XX -- Radix Output

False start, kept for reference.

Radix Output

(Title Page/Index)

 

Now that we've debugged getting a key from the ST's keyboard and outputting its ASCII code value in hexadecimal and binary on the 68000, a natural next step would be to learn how to parse numbers from the input. But that will require multiplying and dividing by ten.

Why? Because we usually interact with numbers in decimal base -- radix base ten.

While we can do that on the 68000, we haven't really talked about it, and we haven't looked at how to synthesize multiplication and division on 8-bit CPUs that don't have them.

So, instead of going directly to parsing numbers, I want to look at multiplication and division, at least enough to be able to multiply and divide by ten. 

But we've already been multiplying and dividing by two and sixteen, haven't we? 

Haven't we?

Let's look again at getting both binary and hexadecimal output. We need to understand what we are doing there.

When converting to binary from base ten by hand, the usual approach (ignoring fractions) is 

Set the radix point (fraction/decimal point) on the right.
Do until all digits (bits) converted (until quotient is zero):
  Divide the number by 2, keeping both quotient and remainder.
  Convert the remainder to a character and 
    write it down as the next digit,
    going left from the radix point.
  Repeat with the quotient.

Now, even taking into account that this algorithmic description is rather loose, looking at what we were doing in the 6800 chapter, it looks different, doesn't it? 

We were going left-to-right, and not even noticing the radix point until we were done, if then.

Let's look at the 6809 code again (since I think the 6809 code is easier to read):

* Output a 0
OUT0	LDB	#'0
OUT01	PSHU	D
	LBSR	OUTC
	RTS
*
* Output a 1 
OUT1	LDB	#'1
	BRA	OUT01
* Rob code, shave a couple of bytes, waste a few cycles.
*
* Output the 8-bit binary (base two) number on the stack.
* For consistency, we are passing the byte in the low-order byte
* of a 16-bit word.
OUTB8	LDB	#8	; 8 bits
	STB	0,U	; Borrow the upper byte of the parameter.
OUTB8L	LSL	1,U	; Get the leftmost bit of the lower byte.
	BCS	OUTB81
OUTB80	BSR	OUT0
	BRA	OUTB8D
OUTB81	BSR	OUT1
OUTB8D	DEC	,U
	BNE	OUTB8L	; loop if not Zero
	LEAU	2,U	; drop parameter bytes
	RTS

In human language, that's going to look like

Do:
  Shift the bits left, capturing the bit off the top.
  Convert the captured bit to a character and
    write it down as the next digit,
    going right.
  Repeat until no bits remain to be converted.

Yep. Going the opposite direction. And the radix point just ended up where we stopped.

Completely backwards!

What's going on here?

You'll remember that I mentioned that shifting digits to the left (shifting the radix point to the right and filling with zeroes) is the same as multiplying by the radix.

You don't remember that I said that?

What did I say? Ah, here it is, in the chapter on hexadecimal output on the 6800:

... shifting is division and multiplication by powers of two. ...

a little before talking about moving the radix point in decimal numbers, which is the same as shifting decimal digits.

So, shifting bits to the left is multiplying by two. And shifting bits to the right is dividing by two.

And when we grabbed the bit that came off the high end into the carry, we were just grabbing the bits as they came off, right?

Here's how I want to see that. On the one hand we were multiplying by two. On the other hand, the top bit came off into the carry, and we grabbed it. So we were shifting right by 7.

Which is dividing by 2^7, dividing by 128.

This is because the byte is 8 bits, and the 8 bits form something mathematicians call a ring, which we aren't going to describe in detail because I don't want to put everyone to sleep.

But it's mathematics. We can rely on it. Multiplication in a ring is division, and sometimes that is useful.

Now, we could do this:

NUMBUF	RMB	34	; enough for 32 bits of output
*
CNVB8	TFR	DP,A	; point to the direct page
	CLRB
	TFR	D,Y
	LEAY	NUMBUF-LOCBAS,Y	; point to NUMBUF
	LEAY	9,Y	; start at the right
	CLR	,-Y	; NUL terminate it
	LDA	#8	; 8 bits
CNVB8L	LDB	#'0'	; ASCII '0'
	LSR	1,U	; Get the lowest bit into the carry
	ADCB	#0	; convert it to ASCII
	STB	,-Y	; build the string right-to-left
CNVB8D	DECA
	BNE	CNVB8L	; loop until counted out
	STY	,U	; return the address of the buffer
	RTS		; (this ought to work, anyway)

Now we can take the address that CNVB8 returns and pass it off to OUTS, and print the number as a string.

 

 

*********

 

 

With a little thought, we could figure out how to output the character code in base four or eight, as well. Any power of 2 would be just a matter of shifting the bits appropriately and adjusting the resultant value to a symbol that represents the value. 

For any base ten or less, the adjustment is really straightforward in ASCII-based characters -- just adding the value to the ASCII-based character code value of '0'. (This works for UNICODE, too, but we won't be going there.) 

For any base up to sixteen, if the resultant symbol exceeds the ASCII value of '9',  we further add one less than the difference between the ASCII for 'A' and the ASCII for '9'. Or we can test the value first and add the appropriate adjustment in one step: 

  • ASCII for '0' if less than or equal to 9 
  •  and ASCII for ten less than 'A' if greater than 9. 

We saw the former method in the 6800 code for hexadecimal output:

ASC0	EQU	'0	; Some assemblers won't handle 'c constants well.
ASC9	EQU	'9
ASCA	EQU	'A
ASCXGAP	EQU	ASCA-ASC9-1	; Gap between '9' and 'A' for hexadecimal
*
* Mask off and convert the nybble in B to ASCII numeric,
* including hexadecimals
OUTH4	ANDB	#$0F	; mask it off
	ADDB	#ASC0	; Add the ASCII for '0'
	CMPB	#ASC9	; Greater than '9'?
	BLS	OUTH4D	; no, output as is.
	ADDB	#ASCXGAP	; Adjust it to 'A' - 'F'
	...

This would also work as it is for a radix higher than sixteen, if we accept the approach usually taken in radix eleven through sixteen and continue with it, up to base 36 (highest valued digit 'Z').

There are reasons we may not want to do that, but it could be done. 

Anyway, we know we can handle the adjustment in the cases that interest us most. 
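
(In C, the whole adjustment is one comparison. A sketch of mine, good up through base 36.)

#include <stdio.h>

/* Sketch: digit value 0..35 to its ASCII character, '0'-'9' then 'A'-'Z' --
   the same one-test adjustment OUTH4 makes, carried past hexadecimal. */
static char digit_char(unsigned d)
{
    return (d <= 9) ? (char)('0' + d) : (char)('A' + (d - 10));
}

int main(void)
{
    printf("%c %c %c\n", digit_char(7), digit_char(12), digit_char(35));  /* 7 C Z */
    return 0;
}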

Now, let's look again at how we got each digit.

For binary, it was easy. Shift a bit off the left (high-order) end of the binary integer and convert it to ASCII '0' or '1':

* Output a 0
OUT0	LDAB	#'0
OUT01	JSR	PPSHD
	JSR	OUTC
	RTS
*
* Output a 1 
OUT1	LDAB	#'1
	BRA	OUT01
* :::
OUTB8L	LSL	1,X	; Get the leftmost bit.
	BCS	OUTB81
OUTB80	BSR	OUT0
	BRA	OUTB8D
OUTB81	BSR	OUT1

For hexadecimal, it may not be quite as clear that was what we were doing -- shifting a digit's worth of bits off the left, capturing them, and converting them to ASCII:

	LDAB	1,X	; get the byte
	LSRB		; move the hexadecimal digit into place
	LSRB
	LSRB
	LSRB
	BSR	OUTRAD	; convert to ASCII and output
	LDX	PSP
	LDAB	1,X
	ANDB	#$0F	; mask the high digit off
	BSR	OUTRAD	; convert to ASCII and output

Say WHAT?!?!? Those are right shifts! And then no shifts! Just a bit-AND to mask off the ...

Yeah, it would have been a little bit more plain like this:

	CLRB		; ready to capture high four bits
	LSL	1,X	; get high bit off top 
	RORB		; capture it
	LSL	1,X	; get next bit off top 
	RORB		; shift over and capture it
	LSL	1,X	; get next bit off top 
	RORB		; shift over and capture it
	LSL	1,X	; get next bit off top 
	RORB		; shift over and capture it
	BSR	OUTRAD
	CLRB		; ready to capture next four bits
	LSL	1,X	; get next bit off top 
	RORB		; capture it
	LSL	1,X	; get next bit off top 
	RORB		; shift over and capture it
	LSL	1,X	; get next bit off top 
	RORB		; shift over and capture it
	LSL	1,X	; get next bit off top 
	RORB		; shift over and capture it
	BSR	OUTRAD

If you can't see from just reading the code that the result in B and the output is the same, go ahead and substitute these lines of code into the code for chapter 03-05 and trace through it, watching the bits shift around. 

In either method, we shift the high four bits of the byte we're putting out in order, into the low four bits of B. 

Then, in the method above, we shift the low four bits back into the low four bits where they came from. But in the method of chapter 03-05, we just leave them there and mask the high bits off.

If you think of the byte register as a ring of 8 bits, you might be able to see the bits coming back around. 

There's another way of looking at it. In chapter 03-05, we noted that shifting digits left one column is the same as multiplying by the radix. 

Shifting a decimal number left (by adding a zero to the right and moving the decimal fraction point to the right of the added zero) is the same as multiplying by ten. 

Shifting a binary number left one bit is the same as multiplying it by two.

Shifting a hexadecimal digit left by one digit is the same as multiplying by sixteen. Or, shifting a binary number left by four bits is the same as multiplying by sixteen.

How about shifting right? It's the same as dividing by the radix. We'll look at that in a bit.

So, back to thinking about output in base 4. 

If we want to output in base four, we can shift two bits left, capturing and outputting each pair as we go. Do it four times and we've got the byte output in quaternary base.

How about octal? If we shift three bits, then three more, we've only got two left, and that doesn't work. So what we should have done is recognize that we only had 8 bits to shift and only shifted two bits to start, then shifted three and three.

Why? 

It's helpful to note that FF (base sixteen) -- all bits 1, 255 (base ten) -- is 377 (base eight). That's how you write the maximum value of an 8-bit byte in base eight. And the high digit of that is 3, which only takes two bits in binary. So it makes sense that you would only shift off two bits for the first digit.

Now, if you were doing two bytes at once, that would be five sets of three bits and one bit for the most significant digit. 177777 (base eight) is how you write FFFF (base sixteen), 65535 (base ten), the maximum value of a sixteen-bit number, in octal. For binary, it's sixteen digits: 1111111111111111 (base two). For quaternary, it's eight digits: 33333333 (base four).
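
(Sketching the sixteen-bit octal case in C, just to see it -- one bit for the leading digit, then five groups of three.)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t w = 0xFFFF;
    putchar('0' + (w >> 15));                  /* top bit: the digit can only be 0 or 1 */
    for (int shift = 12; shift >= 0; shift -= 3)
        putchar('0' + ((w >> shift) & 7));     /* three bits at a time */
    putchar('\n');                             /* 0xFFFF prints as 177777 */
    return 0;
}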

So we are getting some ideas how output in base four or base eight would work, and how to output sixteen bit values in any radix base that is a power of two. It's just shifting.

But base ten doesn't work like this.

Why?

Let me take you on a short detour through something called binary-coded decimal (BCD).

In hexadecimal, we can record a digit from 0 to F (base sixteen) in four bits, right?

Well, what if we decide to only record digit values 0 through 9? It's a little bit wasteful, but it's enough to encode a decimal digit in four bits.

Let's see it:

0: 0000
1: 0001
2: 0010
3: 0011
4: 0100
5: 0101
6: 0110
7: 0111
8: 1000
9: 1001

Yep, it can be done. 

But, 10011001 (base two), which is 99 (base sixteen), is 128 + 16 + 8 + 1, equal to 153 (base ten).

Where 10011001 (BCD) is 99 (base ten). Eaaaoooooohhh confusion!

But maybe we can see that shifting a BCD number four bits to the left is multiplying by ten? Maybe?

It is. We can play with that later. Let's set BCD aside for a moment.

The point is that, where we can divide binary numbers into fields of n bits for any radix base 2^n, and we can even do something like that for binary coded decimal, trying to divide a straight binary number into fields of radix base ten is going to have us trying to use fractions of bits.

And we don't know how to do that.

I don't think anyone knows a good, simple way to do it, other than repeatedly dividing by ten, which isn't very simple in binary (which is why this chapter is so long). 

Dividing is shifting right. Right? (Sorry.)

It is.

Remember, I pointed out that shifting left 1 bit is the same as shifting right 7? Well, if you capture the bits correctly, anyway.

I'm going to use 6809 code for this example instead of 6800, because we can focus a bit better on what we are doing without having a lot of DEX instructions getting in the way.

Here's what we did for the 6809 binary output in chapter 03-03:

OUTB8	LDB	#8	; 8 bits
	STB	0,U	; Borrow the upper byte of the parameter.
OUTB8L	LSL	1,U	; Get the leftmost bit of the lower byte.
	BCS	OUTB81
OUTB80	BSR	OUT0
	BRA	OUTB8D
OUTB81	BSR	OUT1
OUTB8D	DEC	,U
	BNE	OUTB8L	; loop if not Zero

Instead of shifting out the top and capturing the carry (multiplying by two and capturing the overflow), and writing the number left-to-right, let's divide by two and build the output string for the number right-to-left:

NUMBUF	RMB	34	; enough for 32 bits of output
*
CNVB8	TFR	DP,A	; point to the direct page
	CLRB
	TFR	D,Y
	LEAY	NUMBUF-LOCBAS,Y	; point to NUMBUF
	LEAY	9,Y	; start at the right
	CLR	,-Y	; NUL terminate it
	LDA	#8	; 8 bits
CNVB8L	LDB	#'0'	; ASCII '0'
	LSR	1,U	; Get the lowest bit into the carry
	ADCB	#0	; convert it to ASCII
	STB	,-Y	; build the string right-to-left
CNVB8D	DECA
	BNE	CNVB8L	; loop until counted out
	STY	,U	; return the address of the buffer
	RTS		; (this ought to work, anyway)

Now we can take the address that CNVB8 returns and pass it off to OUTS, and print the number as a string.

And we could take the same approach with the hexadecimal conversion, dividing by sixteen -- shifting four bits right and capturing them in order -- and converting and storing right-to-left.

(But we would actually make a copy, mask the high bits out, convert and store, then divide by sixteen for the next digit, because it's quicker that way. But we will ignore the optimization.)

If we think about it, when we convert from decimal to binary or hexadecimal by hand, that's the way we do it. We divide by the base we are converting to, capture the remainder and write that, writing from right-to-left. And it works from decimal to binary or hexadecimal. Or any radix base to any radix base.
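
(Here's that hand method as a C sketch of mine, for any radix base from 2 to 36. It does go through a buffer, unlike the shift-based routines above -- which is part of the point of the next paragraph.)

#include <stdio.h>

/* Sketch: convert a value to a string in any base 2..36 by repeated
   division, building the string right to left in the caller's buffer. */
static char *to_radix(unsigned value, unsigned base, char *buf, unsigned len)
{
    char *p = buf + len - 1;
    *p = '\0';                                  /* NUL terminate, work leftward */
    do {
        unsigned d = value % base;              /* the remainder is the next digit */
        *--p = (d <= 9) ? ('0' + d) : ('A' + (d - 10));
        value /= base;                          /* repeat with the quotient */
    } while (value != 0);
    return p;                                   /* points at the first digit */
}

int main(void)
{
    char buf[33];                               /* enough for 32 bits in binary */
    printf("%s\n", to_radix(255, 8, buf, sizeof buf));    /* prints 377 */
    printf("%s\n", to_radix(255, 16, buf, sizeof buf));   /* prints FF  */
    return 0;
}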

Why not do that in the first place?

Several reasons. One is that it's useful to be able to get numbers in and out without sending them through a conversion string buffer. Another is that shifting bits in registers and memory is one of the more useful things you can learn about, especially for assembly language. Yet another is, well, ...

Now you know that dividing and multiplying by powers of two is easy, right?

On the 68000, we have general multiply and divide, at least for 16 bits.

On the 6809 and 6801, we have byte multiply to 16 bits. No divide. (Multiply is much easier than divide.)

On the 6800 we have no multiply and no divide.

We are going to have to synthesize some multiplication and division. Also, even on the 68000, multiplication and division cost more than shifts in CPU cycle counts.

It would be nice to have a quick way to multiply and divide by constants other than powers of 2, wouldn't it? Especially by ten?

Why, yes, it would. Let's do it. Multiplication is easier. Let's do some middle-school algebra:

10X == 2(5X)

5X == (4 + 1)X => 10X == 2((4+1)X)

(4 + 1)X == 4X + X => 10X == 2(4X + X)

4X == 2(2X) => 10X == 2(2(2X)+X)

Let's build that up from adds and shifts:

*
MUL10	LDD	,U	; X
	CLR	,-U	; for overflow (parameter 1 off)
	CLR	,-U	; 16 bits (parameter 2 off)
	ASLB		; 2X
	ROLA
	ROL	1,U
	ASLB		; 2(2X)
	ROLA
	ROL	1,U
	ADDD	2,U	; 2(2X)+X
	BCC	MUL10N
	INC	1,U
MUL10N	ASLB		; 2(2(2X)+X) == 10X
	ROLA
	ROL	1,U
	STD	2,U
	RTS
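
(If you'd like to sanity-check the factoring without a 6809 handy, here's the same shift-and-add shape in a C sketch of mine. It doesn't reproduce MUL10's parameter stack or overflow bytes, just the arithmetic.)

#include <stdio.h>
#include <stdint.h>

/* Sketch: 10X == 2(2(2X)+X), using only shifts and one add. */
static uint32_t mul10(uint16_t x)
{
    uint32_t t = (uint32_t)x << 1;   /* 2X */
    t <<= 1;                         /* 2(2X) */
    t += x;                          /* 2(2X)+X */
    return t << 1;                   /* 2(2(2X)+X) == 10X */
}

int main(void)
{
    printf("%u\n", (unsigned)mul10(1234));     /* prints 12340 */
    return 0;
}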

 

 


(Title Page/Index)