In this post, infiniteneslives mentioned the possibility of creating a library that makes a file system out of the 4K blocks on a flash cartridge. This would require running code from RAM, and this code should be tested.
The file system would be log-structured, separated into at least four clusters in a ring structure. All writes would happen in the head cluster, with a blank cluster kept after the head. Each file would begin with a header consisting of a user ID (usually the number of the game on a multicart), a slot ID (to identify a particular file owned by the user; I can think of a legit reason for about 100 save slots), a size in bytes, and a CRC of the header, followed by the file's data.
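The header could be modeled like this; the field order, widths, and CRC-8 polynomial are all assumptions for illustration, not a finalized format:

```python
# Hypothetical 5-byte header: user ID, slot ID, 16-bit size, CRC.
# None of these choices are final; they just make the idea concrete.

def crc8(data, poly=0x31, crc=0xFF):
    """Bitwise CRC-8; the polynomial here is an arbitrary pick."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def pack_header(user_id, slot_id, size):
    """Build a header: 4 bytes of fields plus a CRC over those bytes."""
    body = bytes([user_id, slot_id, size & 0xFF, size >> 8])
    return body + bytes([crc8(body)])

def header_ok(header):
    """A header is valid if its stored CRC matches the recomputed one."""
    return crc8(header[:4]) == header[4]
```

The CRC lets the reader skip a header that was only partially written before a power loss.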
Reading a file would involve these steps, given a user ID, a slot ID, an address in RAM, and a size in bytes:
- Copy code to do the following steps to the "overlay zone" in PRG RAM.
- Scan backwards through all clusters to find the most recently written version of that file.
- Copy that file to the destination.
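The scan-and-copy part can be sketched against an in-RAM model of the log, where `clusters` is a list of clusters ordered oldest to newest and each record is a `(user_id, slot_id, data)` tuple; all the names here are illustrative, not a real API:

```python
def read_file(clusters, user_id, slot_id):
    """Scan backwards through the log so the first match found
    is the most recently written version of the file."""
    for cluster in reversed(clusters):
        for rec_user, rec_slot, data in reversed(cluster):
            if rec_user == user_id and rec_slot == slot_id:
                return data  # the real code would copy this to the RAM address
    return None  # no such file
```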
Writing a file would take these steps:
- Copy code to do the following steps to the "overlay zone" in PRG RAM.
- While the head cluster lacks enough free space to hold the file, perform a defragmentation.
- Append the file to the current cluster.
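A sketch of the write path under the same toy model; `CLUSTER_SIZE`, the 5-byte header, and the injected `defragment` callback are placeholders, and unlike the steps above this version gives defragmentation only one chance before giving up:

```python
CLUSTER_SIZE = 1024  # assumed sector size; the real value depends on the flash
HEADER_SIZE = 5      # user ID, slot ID, 16-bit size, CRC (assumed layout)

def cluster_used(cluster):
    """Bytes consumed in a cluster modeled as a list of (user, slot, data)."""
    return sum(HEADER_SIZE + len(data) for _, _, data in cluster)

def write_file(head, user_id, slot_id, data, defragment=lambda: None):
    """Append a record to the head cluster, defragmenting first if needed."""
    needed = HEADER_SIZE + len(data)
    if cluster_used(head) + needed > CLUSTER_SIZE:
        defragment()
    if cluster_used(head) + needed > CLUSTER_SIZE:
        raise IOError("file system full")
    head.append((user_id, slot_id, data))
```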
To defragment:
- At this point, the cluster after the head is blank. The next cluster that is not blank is the tail.
- Find files in the tail that are the latest version and copy them to the head. When the head fills up, move the head to the new cluster.
- Erase the tail cluster.
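The defragmentation pass might look like this in the toy model, where the ring is a list of clusters (a blank cluster is an empty list) and at least one cluster besides the head is non-blank; for brevity it ignores the head-full case, where the head would advance to the freshly blanked cluster:

```python
def defragment(ring, head_idx):
    """Copy live files from the tail cluster into the head, then erase
    the tail. Returns the head index unchanged (a fuller version would
    advance it when the head fills up)."""
    n = len(ring)
    # The cluster after the head is kept blank; the next non-blank
    # cluster around the ring is the tail.
    tail_idx = (head_idx + 2) % n
    while not ring[tail_idx]:
        tail_idx = (tail_idx + 1) % n
    # A tail record is dead if any newer cluster holds the same (user, slot).
    newer = set()
    idx = (tail_idx + 1) % n
    while idx != (head_idx + 1) % n:
        for user, slot, _ in ring[idx]:
            newer.add((user, slot))
        idx = (idx + 1) % n
    # Within the tail itself, only the last occurrence of a key is live;
    # later dict assignments overwrite earlier ones.
    live = {}
    for user, slot, data in ring[tail_idx]:
        if (user, slot) not in newer:
            live[(user, slot)] = data
    ring[head_idx].extend((u, s, d) for (u, s), d in live.items())
    ring[tail_idx] = []  # erase the tail cluster
    return head_idx
```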
A menu program could finish any interrupted defragmentation while booting (displaying a message similar to that displayed by mainstream operating systems while fscking) and, if a whole bunch of outdated files are present, proactively defragment in the background while the user is selecting an activity to run.
Now for testing: I do have a first-generation kazzo that I've used to dump a few NES carts, but I haven't tested it with my current PCs, which run Windows 7 and Xubuntu 12.04 LTS. Before I start, I'll need a USB serial bootloader cable and a mapper 28 board with 8K SRAM. First I'll get the cable and my kazzo working again, then get sector erase and byte programming working. Come April, once all the compo entries are in, I can make a prototype implementation using four 1024-byte sectors in $7000-$7FFF to make sure the logic is correct, then extend it to flash sectors.