Hello again, I hate to ask so many questions, but I can't find any info on this elsewhere.
Does anyone know of a way to generate random numbers on the NES?
I have to generate random numbers in a few different ranges (0-F, 0-1, and 0-3) and was wondering if anyone knew how.
I believe that on some other low-level systems this can be done by reading an address that changes faster than the system can accurately read it, thus providing mixed results. That would then give a single random bit that could be combined with other random bits to build a larger range. Is there any method like this for the NES?
One common method of generating pseudorandom numbers on 8-bit computers is a polynomial counter. Seed it with the time the player takes to get through the menus, then clock it once for 0-1, twice for 0-3, or four times for 0-15. See the source code for
LJ65,
Concentration Room, and
Russian Roulette for an example of a 32-bit Galois LFSR used as a PRNG.
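For reference, a Galois LFSR step is tiny even in C. This sketch is not the code from those games, just an illustration; the tap mask 0x80200003 is one commonly cited maximal-length polynomial for 32 bits, and the state must be seeded nonzero:

```c
#include <stdint.h>

/* 32-bit Galois LFSR.  Must be seeded to a nonzero value. */
static uint32_t lfsr_state = 1;

/* One clock: returns one pseudorandom bit. */
unsigned int lfsr_step(void) {
    unsigned int out = lfsr_state & 1;
    lfsr_state >>= 1;
    if (out)
        lfsr_state ^= 0x80200003u; /* feedback taps (illustrative) */
    return out;
}

/* Clock it n times for an n-bit value: 1 for 0-1, 2 for 0-3, 4 for 0-15. */
unsigned int lfsr_bits(int n) {
    unsigned int v = 0;
    while (n-- > 0)
        v = (v << 1) | lfsr_step();
    return v;
}
```

On the 6502 the same step is a handful of LSR/ROR and EOR instructions over a 4-byte state.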
I just use this routine, since you can
never prove that it's not random:
Code:
random:
lda #9
rts
Really, to generate a random number, start out with a seed value that's semi-random. I usually have the player press Start or something at the beginning of the game, and that usually comes before I need to use the random number generator. I just take the amount of time it takes the player to press Start, in frames, and use that as the seed value. It's almost impossible to get the exact same value twice.

Then you take that value, shift it, EOR it with VBLCount (an 8-bit variable increasing by 1 every frame), subtract 5 from it, add it to some other variable, etc. Then you save that value as the seed for the next time you use the routine. You basically mutilate the seed value by performing calculations on it with a bunch of variables, and the end result is usually pretty random.
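That mixing step might be sketched in C like this (the variable names and the exact sequence of operations are made up for illustration; any similar jumble of shifts, EORs, and adds works):

```c
#include <stdint.h>

uint8_t seed;      /* frames until the player pressed Start */
uint8_t vbl_count; /* incremented once per frame by the NMI handler */

/* One hypothetical "mutilate the seed" step: rotate, EOR with the
   frame counter, subtract 5, fold in another variable.  The result
   is saved back as the seed for next time. */
uint8_t next_random(void) {
    seed = (uint8_t)((seed << 1) | (seed >> 7)); /* 8-bit rotate left */
    seed ^= vbl_count;
    seed = (uint8_t)(seed - 5);
    seed = (uint8_t)(seed + vbl_count * 3 + 1);
    return seed;
}
```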
Yeah, it's mostly a matter of generating the seed, then using an LFSR that doesn't repeat for long enough. If the seed is zero, however, the LFSR will be stuck at zero forever.
I use a simple, small routine like the one described in
this page.
To increase randomness across frames, I call this routine several times while waiting for the NMI in order to advance a "random" amount of numbers in the sequence.
Further to this, any good methods for generating an 8-bit number within a range without doing (too much) multiplication or division (i.e. fast)?
Tokumaru's method seems the least processor intensive, but I need to generate numbers within ranges, specifically, I need to generate 16 individual random binary values. Is there a way to read a single bit of a number stored in hex?
AND it:
and #%00000001
Will make it 0 or 1. You can then beq or bne depending on the condition.
and #%00001111
will make it a number between 0 and 15.
Oh, so I can address a hex value as if it were a binary value and it'll work just fine?
That's what I was hoping, but I thought I might have to convert it first.
This is awesome, and makes my life 100 times easier, thanks!
Kasumi, that only works if you need a range that's a power of 2. It's more involved if you need another range, especially if it needs to be uniformly distributed. Maybe you can adjust your algorithm so that it works for values in a power-of-2 range.
Zsy wrote:
Oh, so I can address a hex value as if it were a binary value and it'll work just fine?
Internally, all computers store numbers in binary. Hex is just a (usually) more convenient way for programmers to visualize that data. But you can still easily manipulate the individual bits. Several 6502 instructions can be used for manipulating bits (AND, ORA, EOR, ROL, ROR, ASL, LSR, and so on). In your ASM code, just like you use "$" to represent hex values you can use "%" to represent binary numbers.
blargg wrote:
Kasumi, that only works if you need a range that's a power of 2. It's more involved if you need another range, especially if it needs to be uniformly distributed.
So far I never needed uniformly distributed random numbers in a range that wasn't a power of 2, so I just truncate the numbers in the easiest possible way that will make it fit in the range I want.
I can't think of a way to uniformly distribute random numbers in arbitrary ranges that doesn't involve at least a multiplication. For a maximum range of 0-255 I'd probably do something like ((upper limit + 1) * 8-bit random number) / 256. The division would be accomplished by simply ignoring the lower bits of the multiplication result. But now that I think of it, it wouldn't be very uniform... because of rounding some numbers would be selected twice...
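tokumaru's formula, sketched in C (`scale8` is an illustrative name). The rounding bias he suspects is real: with limit = 79, the 256 possible inputs map onto 80 outputs, so 16 outputs receive 4 inputs each while the other 64 receive only 3.

```c
#include <stdint.h>

/* Scale an 8-bit random value r into 0..limit using one multiply:
   ((limit + 1) * r) / 256.  The division is free -- just keep the
   high byte of the 16-bit product. */
uint8_t scale8(uint8_t r, uint8_t limit) {
    return (uint8_t)(((limit + 1) * r) >> 8);
}
```

On the 6502 this costs one 8x8 multiply (a shift-and-add loop or a table), and as noted it is only approximately uniform.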
One thing you can do, and which is done in four projects of mine (LJ65 for NES, FK Convey for PC, Lockjaw for PC/GBA/DS, and something unreleased for NES), is feed the power-of-two PRNG through a least-recently-used queue. Put the numbers in a queue, select one of the members at the front of the queue at random, and rotate it to the back of the queue. This also avoids generating the same number several times in a row, if your game rules consider repetition to be a problem.
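A rough C model of that queue (the queue length, the size of the "front" window, and the function names are all assumptions for illustration, not tepples' actual code):

```c
/* Least-recently-used queue randomizer: pick randomly among the
   first FRONT entries, then rotate the chosen entry to the back.
   Recently dealt values sink to the back and can't repeat soon. */
#define QUEUE_LEN 16
#define FRONT 8

static unsigned char queue[QUEUE_LEN];

void lru_init(void) {
    int i;
    for (i = 0; i < QUEUE_LEN; ++i)
        queue[i] = (unsigned char)i;
}

/* r is a fresh number from your power-of-two PRNG */
unsigned char lru_deal(unsigned int r) {
    unsigned int i = r % FRONT;          /* pick near the front */
    unsigned char picked = queue[i];
    for (; i + 1 < QUEUE_LEN; ++i)       /* close the gap */
        queue[i] = queue[i + 1];
    queue[QUEUE_LEN - 1] = picked;       /* rotate to the back */
    return picked;
}
```

Drawing only from the front window means a value just dealt has to work its way forward again before it can reappear.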
Edit: Somewhat ninja'd by Tepples.
blargg wrote:
Kasumi, that only works if you need a range that's a power of 2. It's more involved if you need another range, especially if it needs to be uniformly distributed. Maybe you can adjust your algorithm so that it works for values in a power-of-2 range.
True, but he did specifically ask for something that would give him 16 different values.
If you want something that gives you a different range you could try the method Tepples uses for his Tetris randomizer which I think is pretty clever. (At least, I think. I haven't checked the source code, but he described it this way once, I believe)
Seven pieces. He used 3 bits. Seven of the possibilities deal the piece it corresponds to. (1= I, 2 = O, etc.) When the eighth possibility comes up he increments (or decrements) and wraps around a variable and deals the piece that corresponds to the current value of the variable.
Something like this:
Code:
Randseven: ; this routine ends with a random number 0-6 in the accumulator
jsr randomgen
and #%00000111
cmp #$07
bcc endrandseven
dec incvalue
bpl randseven.nowrap
lda #$06
sta incvalue
randseven.nowrap:
lda incvalue
endrandseven:
rts
This could work even if, say, one needed 12 values out of 16. Just change the AND mask to include another bit, and change the CMP value. The downside is that if you get multiple numbers greater than or equal to the CMP value in a row, they'll each be decremented by one, which isn't very random. But I'd probably cut my losses at that point. To combat this you could use a variable for each of the extra numbers, or I could try to design something else.
edit: or you could use a queue as described by Tepples above.
Kasumi wrote:
Seven pieces. He used 3 bits. Seven of the possibilities deal the piece it corresponds to. (1= I, 2 = O, etc.) When the eighth possibility comes up he increments (or decrements) and wraps around a variable and deals the piece that corresponds to the current value of the variable.
I used this method, which I called "possession arrow", to simulate a uniform randomizer in versions of LJ65 (then called tetramino) prior to 0.33. In that version, I switched to the LRU system to reduce repetition, following Arika's TGM games that use a conceptually similar system.
@Tetris randomiser : just ace! I wish I could think like that
This question comes up a lot, so I've thought about it some. I'm not sure I'm analyzing it correctly (and I haven't actually tested it, so take this with a grain of salt).
My thought is to take a random number MOD your range and then accumulate, hoping that the bias distributes itself over the range (or that the bias ends up "diluted" in the accumulation, depending on how you look at it).

Choose a range of random numbers that's easy to MOD down to your target range. So e.g. if your random number generator returns numbers 0-FF and you want random numbers in the range 0-1B, keep only the lower five bits (the next higher power of two; going from one power-of-two range to another power-of-two range shouldn't introduce any bias). Then:

MOD that to your target range.
Add it to your random number accumulator.
MOD that to your target range.
something like this:
Code:
; return a number in the range 0-1B
; start with a random number 0-FF in A
; myrand needs to be initialized to be in range
; MOD down to an easier power of two
and #$1F ; if A is random in a power-of-two range,
         ; the result should still be random
; MOD that to the target range
cmp #$1C
bcc ACCUMULATE ; already in range 0-1B, skip the subtract
sbc #$1C       ; carry is known set here, so this subtracts exactly $1C
ACCUMULATE:
clc
adc myrand
; MOD to the target range
cmp #$1C
bcc SAVE_MYRAND
sbc #$1C       ; carry is known set here
SAVE_MYRAND:
sta myrand
rts
Edit:
I forgot to mention that you could use the next lesser power of two to start and save some MODding. My gut says that the larger the range you start with, the better off you'll be distribution-wise, so it becomes something of a trade-off between the size of the range and the effort spent doing the MOD. I'd guess that using the next lesser power of two probably wouldn't make much difference, especially if it's close to your target range.
Here's one I came up with to generate a random value between 0-79 with uniform distribution (for use in Litewall, which has a 10x8 grid it wants to generate random coordinates into):
Code:
random_80:
@retry:
jsr random_256 ; A = random 8-bit value
sec
sbc #80
bcc @done
sbc #80
bcc @done
sbc #80
bcs @retry
@done:
adc #80
rts
It divides by 80 and takes the modulus. This takes care of the first 240 values, leaving 16 extra. In that case, it gets a new random value and tries again. The probability of retrying the first time is 16 in 256, or 1 in 16. The probability of it retrying two times in a row is 1 in 16*16 = 1 in 256, and so on. This means that on average, it takes only slightly longer than random_256, but worst-case, it could take forever, assuming random_256 has an infinite period. If random_256 is a simple 8-bit LFSR, this routine will always terminate, since random_256 would then generate each value every 256 calls.
Or in C:
Code:
unsigned char rand80() {
    unsigned char raw;
    do {
        raw = random_256();
    } while (raw >= 240);
    return raw % 80;
}
blargg wrote:
If random_256 is a simple 8-bit LFSR, this routine will always terminate
In a video game or a real-time demo, you want the RNG to terminate within a guaranteed amount of cycles so that you don't overflow frame time. In this case, I'd recommend something similar to possession arrow logic for cases where the base RNG returns 240-255: increment a counter by 16 (mod 80) and then combine that with the low bits of the remainder. Or 3 bits for Y and 3 bits plus an LRU unit for X.
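A guess at that bounded-time idea in C (the name `rand80_bounded` and the exact counter step are my interpretation of the description, not tepples' actual code):

```c
/* Bounded-time variant of rand80: inputs 0-239 reduce mod 80 as
   before; for the 16 leftover inputs 240-255, instead of retrying,
   step a "possession arrow" counter through 16,32,48,64,0 and add
   the input's low 4 bits, giving a value that is still 0-79. */
static unsigned char arrow = 0;

unsigned char rand80_bounded(unsigned char r) {
    if (r < 240)
        return r % 80;
    arrow = (unsigned char)((arrow + 16) % 80);
    return (unsigned char)(arrow + (r & 15));
}
```

Worst-case time is now constant, at the cost of the leftover values being only approximately random.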
Sorry for the necro post, but I didn't see another thread on the topic.
I want to produce a large number of random sequences of the numbers 0-15 where each number must occur exactly once. A 4-bit LFSR would make the sequence predictable very quickly, and an 8-bit LFSR (which would be much less predictable) doesn't guarantee that any 4 bits I choose (the last 4 out of 8, for example) won't produce the same combination of values twice within a 16-value-long sequence, as far as I know.
One solution could be to run Fisher-Yates on an array of the numbers I want, where the random number (i.e. the index) is taken from the last 4 bits of an 8-bit LFSR (picking the next value or truncating it should it be out of range) that is periodically seeded by the unknowing player by counting the number of frames that pass between certain actions (in this case, pulling the trigger on the Zapper). That should give pretty good randomness, but I could be wrong, and I don't even know whether there's a way I could prove myself right or wrong.
Another solution could be to keep a table of sequences that are chosen from by the aforementioned frame counter, but that seems like a pretty bad idea considering, uh, [grabs a calculator] there are nearly 21 trillion different possible sequences and each sequence would need 8 bytes of memory for storage. So yeah, very bad idea.
So... does anyone feel right away that the 8-bit LFSR + Fisher-Yates solution is also a bad idea, for any reason? In particular, is anyone good enough at math to tell me whether or not the sequences produced by this method are equally likely, or whether there's a tendency to prefer a particular set of sequences? The delay between trigger pulls is likely to have some kind of normal distribution around 60 frames or so, but other than that... not sure whether I should investigate further or just try it and see how well it works. It's going to be in silico either way.
What I'd use is a 16-bit LFSR (use Greg Cook's CRC16 from 6502.org) feeding Fisher-Yates.
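Fisher-Yates itself is only a few lines. Here's a C sketch driven by any byte-wide PRNG; `demo_rnd8` is a stand-in LCG so the example is self-contained (on the NES you'd call your LFSR/CRC16 routine instead), and modulo bias is avoided by masking to a power of two and retrying out-of-range indices, as described above:

```c
/* Stand-in 8-bit PRNG; replace with your own generator. */
static unsigned int lcg = 1;
unsigned int demo_rnd8(void) {
    lcg = lcg * 1103515245u + 12345u;
    return (lcg >> 16) & 0xFF;
}

/* Fisher-Yates shuffle of deck[0..15].  For each i, j is uniform in
   0..i: mask the random byte down to the next power of two minus one
   and reject values that overshoot i. */
void shuffle16(unsigned char deck[16], unsigned int (*rnd8)(void)) {
    int i;
    for (i = 15; i > 0; --i) {
        unsigned int mask = 1, j;
        unsigned char t;
        while (mask < (unsigned int)i)
            mask = (mask << 1) | 1;   /* smallest 2^k - 1 >= i */
        do {
            j = rnd8() & mask;
        } while (j > (unsigned int)i);
        t = deck[i];
        deck[i] = deck[j];
        deck[j] = t;
    }
}
```

Each value 0-15 appears exactly once in the result, which is exactly the "deal each number once" requirement.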
Is there any reason you can't do a straightforward shuffle of 16 values in RAM? Write an array with 0-15, then step through each of the 16 entries and swap it with a random entry from the array.
How is a "straightforward" shuffle different from Fisher-Yates? It swaps with any value, not just the remaining ones? How does that ensure I don't get more than one of the same value in the sequence?
EDIT: Oh, wait. I'm stupid. Sorry.
Now I'm really confused. When swapping a[i] with a[j] in an array of length N, what is the purpose of having 0 <= j <= i (as in Fisher-Yates) when it might as well be 0 <= j <= N-1 (as blargg suggests)? It must have something to do with avoiding bias, but I don't know for sure.
In theory, there's a bias in the case of 0 <= j < N. Proof: There are N! distinct permutations after the algorithm. With 0 <= j <= i, there are also N! distinct paths to these permutations. But with 0 <= j < N, there are N^N equally distributed paths. Because N! does not generally divide N^N (example: N = 3; 3! = 6 does not divide 3^3 = 27), these states don't all have the same number of paths to them; therefore bias.
But in practice, I dare someone else to prove that this bias is any more noticeable than the limitation that a CRC16 can generate only 65536 of the 2.1×10¹³ distinct permutations.
I drew a ternary tree of possibilities for the array [1, 2, 3] (i.e. length 3) and came up with 27 leaves, distributed like so:
123 x 4
132 x 5
213 x 5
231 x 5
312 x 4
321 x 4
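Those counts can be double-checked by brute force: enumerate all 3^3 = 27 swap paths of the naive shuffle and tally where each one ends up. A small C sketch (`naive_count` is an illustrative name):

```c
/* Count how many of the 27 paths of the naive shuffle on [1,2,3]
   (for i = 0..2, swap a[i] with a[j] for a chosen 0 <= j < 3) end
   in the permutation (p0, p1, p2). */
int naive_count(int p0, int p1, int p2) {
    int n = 0, j0, j1, j2;
    for (j0 = 0; j0 < 3; ++j0)
        for (j1 = 0; j1 < 3; ++j1)
            for (j2 = 0; j2 < 3; ++j2) {
                int a[3] = {1, 2, 3};
                int js[3] = {j0, j1, j2};
                int i, t;
                for (i = 0; i < 3; ++i) {
                    t = a[i]; a[i] = a[js[i]]; a[js[i]] = t;
                }
                if (a[0] == p0 && a[1] == p1 && a[2] == p2)
                    ++n;
            }
    return n;
}
```

It reproduces the tallies above: 132, 213, and 231 occur 5 times each; 123, 312, and 321 occur 4 times each.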
I came up with a more elegant explanation, but tepples beat me to it.
Anyway, in this particular case, as tepples suggests, I think I'll run with blargg's version, because it's much easier to implement, and because the bias isn't going to matter, especially since the input will be shuffled from last time anyway.
Thanks, guys!
I used ARCFOUR, running several times per frame when nothing else is doing, and including the microphone if it is available.
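The RC4 internals aren't shown in the post, but the textbook algorithm is small enough to sketch in C (key material on the NES could be frame counts, controller reads, the microphone bit, and so on):

```c
#include <stdint.h>

/* Textbook ARCFOUR (RC4): a 256-byte state array plus two indices. */
static uint8_t S[256];
static uint8_t rc4_i, rc4_j;

void rc4_init(const uint8_t *key, int keylen) {
    int i, j = 0;
    uint8_t t;
    for (i = 0; i < 256; ++i)
        S[i] = (uint8_t)i;
    for (i = 0; i < 256; ++i) {      /* key-scheduling algorithm */
        j = (j + S[i] + key[i % keylen]) & 0xFF;
        t = S[i]; S[i] = S[j]; S[j] = t;
    }
    rc4_i = rc4_j = 0;
}

uint8_t rc4_byte(void) {             /* one keystream byte */
    uint8_t t;
    rc4_i = (uint8_t)(rc4_i + 1);
    rc4_j = (uint8_t)(rc4_j + S[rc4_i]);
    t = S[rc4_i]; S[rc4_i] = S[rc4_j]; S[rc4_j] = t;
    return S[(uint8_t)(S[rc4_i] + S[rc4_j])];
}
```

Calling the generator a few extra times per frame, as described, just advances the keystream by that many bytes.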
This is not exactly NES stuff, but in my MD game "Glass Breaker" I used this to create some garbage for random enemy behaviour:
Code:
MOVE.W (GARBAGE), D0
MOVE.W D0, D1
NOT.W D1
ROR.W #3, D0
EOR.W D1, D2
ADD.W D1, D0
ROL.W #7, D1
ADD.W D1, D0
EOR.W D2, D0
MOVE.W D0, (GARBAGE)
It takes very little time and gave me output that was adequate for what I needed.
I didn't really spend much time on it; I just rearranged some instructions until I got a satisfying result...