Mesen is a high-accuracy NES emulator for Windows/Linux/Libretro - as far as I know, it passes more test ROMs than any other emulator currently available.
It has most features you would expect from an emulator (save states, online play, cheats, movies) and a lot of advanced options (rewinding, overclocking, sprite limit removal, custom palettes, stereo effects, HD pack support, automatic updates, etc.). It also includes a very complete set of debugging tools (including Lua scripting).
Download: https://www.mesen.ca/download.php
Website: https://www.mesen.ca
Documentation: https://www.mesen.ca/docs/
Source (GPLv3): https://github.com/SourMesen/Mesen
Releases: https://github.com/SourMesen/Mesen/releases

The current version is 0.9.8, released on June 23, 2019.
This is an excellent emulator! Tons of features. Sound is great. It handled everything I threw at it, except for one little bug I found (which you're likely already aware of)... got a hang in Battletoads on the second level.
Joystick emulation has a bug: it seems not to respect the strobe status, and always shifts bits out after the first write. I think you need to make reads not trigger shifts until the strobe bit is cleared.
Example joystick reading code that shows the bug:
Code:
ldx #1
stx $4016      ; strobe on: the controller keeps reloading its shift register
lda $4016      ; read while the strobe is high - this must NOT shift anything
dex
stx $4016      ; strobe off: buttons latched, reads now shift bits out
and #$FC       ; keep only the open-bus upper bits of the read
tax            ; X = expected open-bus value with the data bits clear
cpx $4016      ; carry clear if the button bit is set, carry set otherwise
ror a          ; rotate the (inverted) button bit from carry into A
cpx $4016
ror a
cpx $4016
ror a
cpx $4016
ror a
cpx $4016
ror a
cpx $4016
ror a
cpx $4016
ror a
cpx $4016
ror a
eor #$FF       ; invert: 1 = pressed, A in bit 0 through Right in bit 7
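On correct hardware, reads made while the strobe bit is high return the A button state without shifting, and shifting only begins once the strobe is cleared. A minimal C model of that behavior (hypothetical names; this is an illustration, not Mesen's actual code):

```c
#include <stdbool.h>
#include <stdint.h>

// Minimal model of the standard controller port. While the strobe bit is
// set, the shift register is continuously reloaded, so reads return the A
// button without shifting. Only after the strobe is cleared do reads shift.
typedef struct {
    uint8_t buttons;  // current button state, bit 0 = A, LSB first
    uint8_t shift;    // latched shift register
    bool strobe;
} Controller;

void write_4016(Controller *c, uint8_t value) {
    bool new_strobe = (value & 1) != 0;
    if (c->strobe && !new_strobe)
        c->shift = c->buttons;  // latch on the strobe's falling edge
    c->strobe = new_strobe;
}

uint8_t read_4016(Controller *c) {
    if (c->strobe)
        return c->buttons & 1;  // strobe high: report A, do NOT shift
    uint8_t bit = c->shift & 1;
    c->shift = (c->shift >> 1) | 0x80;  // after 8 reads, further reads return 1
    return bit;
}
```

The bug described above corresponds to shifting inside read_4016 even while strobe is set, which makes every read after the first off by one.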
Sounds cool, but...
Attachment: mesen-error.png
Any idea of what could've gone wrong?
Works fine here on my Windows 10 x64. Nice work!
It references System.IO.Compression and System.IO.Compression.FileSystem, and those should come with .NET 4.0 or greater.
Dwedit wrote:
It references System.IO.Compression and System.IO.Compression.FileSystem, and those should come with .NET 4.0 or greater.
Thanks. Looks like I don't have 4.5 installed, which, according to the site, is required. Will try to install it now.
Thanks for the replies!
Wasn't aware of the Battletoads freeze on level 2, thanks for letting me know, I'll look into it.
Dwedit - It does look like the code for this is incorrect, thanks!
tokumaru - Not sure if you've had any success with installing .NET 4.5, but that's most likely the source of the error. Mesen uses .NET 4.5-specific features to read/extract data from zip files, which is the first thing it needs to do when starting up.
This is how it looked when it hung on Battletoads for me:
It happened when I moved all the way to the top of the screen. Doesn't necessarily mean it's related, but that's when it did this in any case. The game ran perfectly until this point.
Awesome emulator. Very impressive work.
As for Battletoads, many threads on this forum have been dedicated to that exact stage 2 freezing issue. The nature of the issue is not fully understood. Hopefully, further research will yield a new test ROM.
Sour wrote:
tokumaru - Not sure if you've had any success with installing .NET 4.5, but that's most likely the source of the error.
Yup, installing .NET 4.5 fixed it. It's a very nice emulator! The debug tools look promising.
zeroone wrote:
The nature of the issue is not fully understood.
Isn't it just a scrolling/sprite 0 hit issue?
tokumaru wrote:
Isn't it just a scrolling/sprite 0 hit issue?
Yes. And, it has been confirmed with trace logs. But, the test ROMs do not detect the problem. The details of the issue are not fully understood.
@Sour
Could you provide a zip download link? Antivirus software does not like downloading .exe files.
Can't you temporarily disable your Antivirus?
tokumaru wrote:
Can't you temporarily disable your Antivirus?
I can't
zeroone wrote:
Could you provide a zip download link? Antivirus software does not like downloading .exe files.
I figured this would end up happening!
I added a link to a zipped version in my first post. The website now links to the .zip version as well.
As for Battletoads, the freezes look like they are caused by the code enabling rendering on scanline 14, cycle 255, right after resetting the scrolling values. This triggers the vertical scrolling increment at cycle 256 one scanline too early, sprite 0 hit doesn't trigger because the background is drawn one pixel too high, and the game locks up. During normal gameplay on level 2, the writes to $2001 never seem to occur before cycle ~261.
Haven't been able to find an actual solution yet - everything that appears to fix Battletoads (or at least, the specific freeze I managed to record in a save state) makes one or more test roms fail.
Thanks to everyone who tried it out, very much appreciated!
Sour wrote:
I added a link to a zipped version in my first post. The website now links to the .zip version as well.
Thanks.
I got it running and I loaded SMB3. It seems to run at normal speed for a few seconds and then something goes wrong with the timing. The image is refreshed every few seconds, almost like it's keeping up in the background yet failing to show all the frames.
Here are the specs of my test box:
Windows 7 Professional
Service Pack 1
Dell
5.1 Windows Experience Index
Intel Core i5-2400 CPU @ 3.10GHz
8 GB RAM
64-bit OS
I never saw a problem like this with any other emulator thus far. It made me install the VS2015 64-bit runtime before starting up. Maybe I need to reboot or something?
Edit: I tried a few more games. I can tell from the music and sound effects that the emulator is running, but it's definitely failing to render to the screen. It's not a mapper specific issue; all games seem to have this issue on my box.
zeroone wrote:
I got it running and I loaded SMB3. It seems to run at normal speed for a few seconds and then something goes wrong with the timing. The image is refreshed every few seconds, almost like it's keeping up in the background yet failing to show all the frames.
Can you try displaying the FPS counter (F10) to see what it says? The first number is the number of frames emulated/sec, the second is the number of frames/sec actually sent to the video card.
e.g.:
Normally this would be 60/60.
If you set the emu to "Max speed" (F9), it might go up to, say, 300/60.
If you disable vertical sync, both numbers should be relatively similar, i.e. 300/300.
Disabling vertical sync might change something. (it's in the video options)
Also, could you try running this utility?
https://technet.microsoft.com/en-us/sys ... 97568.aspx
The "Current timer interval" shown by ClockRes is set to 1ms on my computer - it might be different on yours, which could be part of the problem, not quite sure.
Sour wrote:
Disabling vertical sync might change something. (it's in the video options)
That did the job. With vertical sync disabled, it plays fine.
I've updated the download to version 0.1.1, and pushed an update for it, too.
Technically, people who still have 0.1.0 should get an update prompt when running it (if you didn't disable automatic updates in the options).
If any of the people who tried 0.1.0 still have it and could let me know if the upgrade process works correctly, that'd be great.
Changes in 0.1.1:
-Support for Arkanoid paddle/controller
-Debugger can now display/step through code that's being executed in cpu/cartridge ram
-Fixed (hopefully) the standard controller input bug mentioned by Dwedit.
-Fixed the Battletoads freeze issue (or at the very least, I can't seem to reproduce it anymore in level 2)
-Fixed Netplay (didn't work at all in 0.1.0 because of a silly mistake, whoops)
Thanks to everyone for the bug reports and trying the emulator out!
Sour wrote:
-Fixed the Battletoads freeze issue (or at the very least, I can't seem to reproduce it anymore in level 2)
Can you share your findings on this? Several others are experiencing the same issue.
When I started up Mesen, it prompted me to upgrade, which resulted in this:
zeroone wrote:
tokumaru wrote:
Can't you temporarily disable your Antivirus?
I can't
For the record, which AV is it, and what exact error does it give when it blocks a download? In fact, how do you download any other app? Or does AV complain only about those .exe files that aren't produced by an organization with a valid software publisher certificate issued by a Microsoft-certified Authenticode CA? Are you not a member of the Administrators group on the PC you use? Or is it behind a web proxy that intercepts all HTTP connections and scans all files?
I ask because other people who distribute test builds of emulators will likely have to put up with the same limits for other testers who use your AV product.
The "fix" for Battletoads is probably incorrect in some way or other. Basically, if rendering was disabled and is enabled on tick 255, the vertical scroll increment at cycle 256 will not be done (basically I added a 1 PPU cycle delay on the effect of the rendering flags in relation to the scrolling increments). This solved the tiny timing issue in my case, but it's unlikely the real hardware behaves this way.
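A minimal sketch of that kind of delay (hypothetical names and structure, not Mesen's actual code; as noted, this may well not match real hardware):

```c
#include <stdbool.h>
#include <stdint.h>

// Sketch of a PPU where the scroll increments see the rendering flag with a
// one-cycle delay: a $2001 write that lands on cycle 255 is not yet visible
// to the vertical increment on cycle 256.
typedef struct {
    bool rendering_enabled;  // current $2001 rendering flag
    bool delayed_enabled;    // flag as seen by the scroll increment logic
    uint16_t v;              // VRAM address ("loopy v")
} Ppu;

// Standard loopy-v vertical increment (as documented on the nesdev wiki).
static void increment_vertical_scroll(uint16_t *v) {
    if ((*v & 0x7000) != 0x7000) {
        *v += 0x1000;                       // increment fine Y
    } else {
        *v &= (uint16_t)~0x7000;            // fine Y = 0
        uint16_t y = (*v & 0x03E0) >> 5;    // coarse Y
        if (y == 29)      { y = 0; *v ^= 0x0800; }  // flip vertical nametable
        else if (y == 31) { y = 0; }                // nametable not flipped
        else              { y++; }
        *v = (uint16_t)((*v & ~0x03E0) | (y << 5));
    }
}

// One PPU cycle; register writes for a cycle are applied after its tick.
void ppu_tick(Ppu *ppu, int cycle) {
    if (cycle == 256 && ppu->delayed_enabled)
        increment_vertical_scroll(&ppu->v);
    ppu->delayed_enabled = ppu->rendering_enabled;  // the one-cycle delay
}

void write_2001(Ppu *ppu, uint8_t value) {
    ppu->rendering_enabled = (value & 0x18) != 0;   // show bg or sprites
}
```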
As for the upgrade crash, I have a feeling this may be AV related as well. If you check your documents folder, there should be a mesen subfolder that contains MesenUpdater.exe - this file is probably blocked or missing on your computer?
tepples wrote:
For the record, which AV is it, and what exact error does it give when it blocks a download? In fact, how do you download any other app? Or does AV complain only about those .exe files that aren't produced by an organization with a valid software publisher certificate issued by a Microsoft-certified Authenticode CA? Are you not a member of the Administrators group on the PC you use? Or is it behind a web proxy that intercepts all HTTP connections and scans all files?
I ask because other people who distribute test builds of emulators will likely have to put up with the same limits for other testers who use your AV product.
It's Symantec Endpoint Protection. It deletes the .exe shortly after download. In fact, Chrome popped up a warning saying not to download .exe files and it said that this particular .exe file was rarely downloaded (it's a newly posted file). Symantec even interfered when I tried downloading it with wget via Cygwin. I had no firewall issues. However, I do not have admin rights to this test box.
I was able to download and extract a .zip containing the .exe without an issue. The auto-update might be failing due to lack of admin rights or Symantec. Personally, I prefer .zip files anyway. I like to manually scan downloaded apps for malware (even though anyone on this forum could just create a custom executable that kills my machine). And, I prefer not to auto-update if I have something that works. Think about how annoying auto-update is in Windows.
Sour wrote:
The "fix" for Battletoads is probably incorrect in some way or other. Basically, if rendering was disabled and is enabled on tick 255, the vertical scroll increment at cycle 256 will not be done (basically I added a 1 PPU cycle delay on the effect of the rendering flags in relation to the scrolling increments). This solved the tiny timing issue in my case, but it's unlikely the real hardware behaves this way.
Others have suggested similar patches. I'm still hoping someone creates a new test ROM to fully vet this.
Sour wrote:
As for the upgrade crash, I have a feeling this may be AV related as well. If you check your documents folder, there should be a mesen subfolder that contains MesenUpdater.exe - this file is probably blocked or missing on your computer?
No subfolder was created. At least for testing, please continue to provide .zip links.
zeroone wrote:
It's Symantec Endpoint Protection. [...] In fact, Chrome [...] said that this particular .exe file was rarely downloaded (it's a newly posted file).
Thank you for providing enough information to allow research.
My research shows that "Safe Browsing"-type warnings are less likely to trigger if the publisher of an application follows these steps. Two should be free of charge; two require a periodic payment to a certificate authority.
A. Offer the download through HTTPS. HTTPS is HTTP tunneled through TLS (Transport Layer Security), formerly called SSL (Secure Sockets Layer). TLS requires a valid TLS certificate, which is an X.509 certificate certifying that a certificate authority (CA) has verified that the owner of the private key corresponding to a particular public key controls a particular domain. Domain-validated TLS certificates are available without charge from StartSSL, WoSign, and Let's Encrypt; organizationally validated ones cost more. If you have a VPS, you can install it at no extra charge; if you have shared hosting, you'll have to have your hosting provider install the certificate (for StartSSL or WoSign) or an ACME client (for Let's Encrypt) on your behalf. I had to go through StartSSL because my web host has not yet installed an ACME client, despite two duplicate questions on its Stack Exchange knockoff to do so (1 | 2).
B. Build a history of downloads by users of the same browser of executables hosted on the same domain. This is the most important step for Google Chrome's Safe Browsing feature.
C. Digitally sign and timestamp each executable file with a valid Authenticode software publisher certificate. A software publisher certificate allows earned reputation to leak into other executables from the same publisher, such as new versions of a program. But unlike TLS certificates, Authenticode certificates aren't available without charge because there's no counterpart to a domain-validated certificate.
D. Build a history of downloads by users of the same browser of executables signed with the same Authenticode certificate. This is the most important step for the SmartScreen feature of Internet Explorer. And in Windows 8 and Windows 10, even zipped executables are subject to SmartScreen. If the certificate that you bought in step C is an Extended Validation certificate, Internet Explorer will let you skip this step.
E. Submit the executable to Symantec for whitelisting prior to release.
Sources:
Google Chrome Help Forum; Google "internet explorer smartscreen"; Adding software to the Symantec Whitelist

But then Symantec has a conflict of interest here, as it is also an Authenticode certificate authority.
Quote:
However, I do not have admin rights to this test box.
Have you requested that your administrator add executables or web sites to the whitelist? If so, what reason was given for denial?
Quote:
I was able to download and extract a .zip containing the .exe without an issue.
I guess that's a valid workaround for you, but looking forward, I can see that it might not be a valid workaround for users of SmartScreen on Windows 8 and Windows 10.
Quote:
And, I prefer not to auto-update if I have something that works. Think about how annoying auto-update is in Windows.
But how do you know it works? What if the emulator has a bug that allows a ROM to escape from the emulator and run native code as the user, as ZSNES's SA-1 support is known to have? Does something "work" if it is unsafe in this manner?
zeroone wrote:
No subfolder was created.
There should definitely be a folder, otherwise the emulator wouldn't run at all. There's a button to open it in the emulator - in Options -> Preferences -> Open mesen folder
There should be a few subfolders and files in it, including MesenUpdater.exe
Hi folks,
I just released an updated version of Mesen (0.1.2)
The website has also been updated (now available in both French & Japanese, like Mesen itself).
If you try it out, let me know if you find any issues.
Thanks!
-----
Changelog:
New Features
UI: Mesen is now available in English, French and Japanese.
Compatibility: Added basic support for VS Unisystem games. (Mapper 99)
Compatibility: Added support for mappers 82 and 241.
Audio: Added customizable fake stereo effects.
Audio: Added option to swap Square 1 and Square 2's duty cycles.
Bug Fixes
Auto-updates: Fixed bug that caused auto-updates to fail (MesenUpdater.exe was missing)
Sorry for the bump!
I just updated Mesen to 0.1.3 to fix some startup crashes that were introduced in 0.1.2.
Also, Mesen is now open source:
https://bitbucket.org/Souryo/mesen (Also updated the first post to mention this)
Parts of the code are not as clean as I would like, but it'll have to do for now!
Hi,
Sour, you said
Quote:
it passes more test ROMs than any other emulator currently available
and I have to disagree. puNES does better, at least on my set.
So, the tests that it failed:
- ppu_sprite_hit/09-timing.nes: #4 Flag set too late for upper-left corner
- ppu_sprite_overflow/03-timing.nes: #4 Flag cleared too late at end of VBL
- test_apu_timers/dmc_pitch.nes: Sounded noticeably wrong
- tvpassfail: No real surprise here I guess
- test_apu_m: My self-written tests (you can find my post somewhere on this forum). Not all of them are ok due to sync problems, but the ones I've attached should never fail on NES.
I don't have a NES right now and can't check this on real hardware, but I am pretty sure the tests should be correct. And I don't really remember what my tests were about, so don't ask me)
Hey,
Thanks for testing, I did not have any of these tests in my sets.
I was using similar sprite hit & overflow tests, which did pass, but the ones you mentioned don't.
Although, I just tested on the latest puNES release and these 2 tests also fail? They also seem to fail on my potentially out-of-date Nestopia and fairly recent copy of Nintendulator.
I imagine the tests pass on a real NES, but I do not have any way to personally verify this...
Your 3 tests do pass on puNES & Nintendulator, so I'll definitely take a look at those and the DMC test, too.
Thanks again!
P.S: tvpassfail does pass, if you enable the NTSC filter :)
Uh... What do tests 9,10,11 really do? Care to describe them???
Zepper wrote:
Uh... What do tests 9,10,11 really do? Care to describe them???
Well...
x0000 wrote:
And I don't really remember what my tests were about, so don't ask me)
I mean, I wrote these tests more than 3 years ago, and I don't have any notes left since then. And I bet even if they existed I would still be unable to tell their purpose.
If I remember correctly, all tests checked proper length counter decrease overlapping with writes to 0x4017, but how exactly? Here's what I can guess from the listings:
1-4: check if a 0x4017 write causing additional HalfFrame/QuarterFrame signals messes up the normal decrease
5-8: same in 0x80 mode
9-10: check if the LFSR resetting after a 0x4017 write messes up the normal decrease
11: no idea.
Found this in my correspondence with FHorse
x0000 wrote:
From wiki:
Quote:
After 3 or 4 CPU clock cycles*, the timer is reset.
That's true, but if you count clock cycles as ~W goes low, the write commit is actually on ~W going high. Anyway, it tells us that the timer is reset after 3 or 4 clocks.
Quote:
Note: Writing to $4017 with bit 7 set will immediately generate a clock for both the quarter frame and the half frame units, regardless of what the sequencer is doing.
That's not true at all. First of all, the half-frame and quarter-frame clocks can be generated only on odd cycles. But for 0x4017 it is actually generated on reset_cycle - 1, because the reset (as well as any other change of the frame LFSR) can happen only on an even cycle. And if both the frame counter and the 0x4017 write generate a half_frame/quarter_frame clock, there would still be only one clock committed.
If someone can explain what I meant there, please tell me; I am curious too.
Like the Wiki says, writing to $4017 has a delayed effect (applied 3 or 4 cycles later, to correspond with an odd cycle?).
Writing to $4017 with bit 7 set clocks the half/quarter frames, but does so after the 3/4 cycle delay (this wasn't implemented in my emu).
Additionally, it looks like if the frame counter is reset at or around the same time it clocks the half/quarter frames through its normal process, the extra frame clock that is normally caused by a write to $4017 (with the high bit set) does not occur.
Implementing both of these behaviors makes Mesen pass all of these tests (1 to 11) - although I am unsure of how accurate my implementation is vs the actual NES.
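The two behaviors described above can be sketched roughly like this (hypothetical names and structure, purely to illustrate the timing; not Mesen's actual code, and its accuracy vs. the real NES is uncertain, as noted):

```c
#include <stdbool.h>
#include <stdint.h>

// Sketch of the two $4017 behaviors: the write takes effect 3 or 4 CPU
// cycles later, and the extra half/quarter-frame clock from bit 7 is
// suppressed if the sequencer clocked those units on the same cycle.
typedef struct {
    int write_delay;      // cycles until a pending $4017 write takes effect
    uint8_t pending;      // value waiting to be applied
    bool five_step_mode;
    int extra_clocks;     // counts the "immediate" half/quarter-frame clocks
} FrameCounter;

void write_4017(FrameCounter *fc, uint8_t value, bool odd_cpu_cycle) {
    fc->pending = value;
    // Delay so the reset lands on a consistent (odd) cycle.
    fc->write_delay = odd_cpu_cycle ? 4 : 3;
}

// Called once per CPU cycle; sequencer_clocked is true when the frame
// counter's normal sequence clocked the half/quarter-frame units this cycle.
void frame_counter_tick(FrameCounter *fc, bool sequencer_clocked) {
    if (fc->write_delay > 0 && --fc->write_delay == 0) {
        fc->five_step_mode = (fc->pending & 0x80) != 0;
        // Bit 7 generates an immediate half/quarter-frame clock, but not if
        // the sequencer already clocked them on this same cycle.
        if (fc->five_step_mode && !sequencer_clocked)
            fc->extra_clocks++;
        // ...reset the sequencer here...
    }
}
```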
Thanks for making those tests!
It cannot be a 3-4 cycle delay; it is either 1-2 or 2-3 cycles depending on where you count from. The half-frame and quarter-frame signals should happen before the LFSR resets. Or am I wrong?
Hey, just a small update, I just released Mesen 0.1.4.
It fixes all the tests mentioned by x0000 as failing on Mesen:
ppu_sprite_hit/09-timing.nes
ppu_sprite_overflow/03-timing.nes
test_apu_timers/dmc_pitch.nes
test_apu_2 - tests 1 to 11
It also adds support for mapper 15 & 60.
Thanks for the bug reports, x0000 - if anybody knows of any tests or games that fail on Mesen, please let me know!
Another update released - version 0.2.0.
This update adds a lot of video filters (xBRZ, Scale2x, HQX, 2xSai, Super2xSai, SuperEagle), options for the NTSC filter, and configuration common to all filters (brightness, hue, saturation, scanlines, etc.)
It also adds Google Drive integration - which lets you put your save games & save states in your Google Drive account (to keep an automatic backup and/or sync them across multiple computers).
A number of bugs (crashes, etc.) were fixed as well, and Mesen no longer depends on the MSVC++ 2015 runtime being installed to work - its only requirement is having .NET 4.5.
If you try it out, let me know what you think!
Sour wrote:
if anybody knows of any tests or games that fail on Mesen, please let me know!
Try out The Incredible Crash Dummies and Kick Master; many emulators have difficulties running them (although it is just a basic A12 counter).
Mesen is a really interesting emulator with many settings and "lots-of-stuff".
I've found a bug in 0.2.0 beta: "Battletoads (U)" and "Battletoads and Double Dragon (U)" hang in Dendy mode, which is wrong behaviour.
The Kick Master issue was caused by a race condition (write to $2006 at the same time as the cycle 256 Y scrolling increment). The game expects the $2006 value to win and the scrolling increment to be ignored (but it wasn't), which produced the weird effect on the title screen.
The Incredible Crash Dummies is apparently not an MMC3 game, but uses the MC-ACC chip which has a slightly different IRQ behavior. I implemented the behavior for it and it fixes the game, but this currently relies on proper NES 2.0 headers (submapper 3) since I'm not sure I want to start putting hash checks into Mesen..
As for Battletoads freezing in Dendy mode: Dendy mode was incorrectly giving 21 scanlines worth of vblank after firing the NMI, instead of 20, which caused Battletoads to freeze. Both games seem to work correctly now.
Thanks to the both of you for the bug reports!
Incorrect APU volume slider behavior, like FCEUX has:
https://sourceforge.net/p/fceultra/bugs/710/
puNES also had the same problem, but it was fixed in 0.95.
Sour wrote:
The Kick Master issue was caused by a race condition (write to $2006 at the same time as the cycle 256 Y scrolling increment). The game expects the $2006 value to win and the scrolling increment to be ignored (but it wasn't), which produced the weird effect on the title screen.
That's what I had reported without success. What a shame... How did you fix it? It affects Mega Man 5 too.
Eugene.S wrote:
Incorrect APU volume slider behavior, like FCEUX has:
https://sourceforge.net/p/fceultra/bugs/710/
puNES also had the same problem, but it was fixed in 0.95.
Thanks, this is fixed as well.
Zepper wrote:
That's what I had reported without success. What a shame... How did you fix it? It affects Mega Man 5 too.
In my case, the second write to $2006 occurs after cycle 255, and right before cycle 256 runs. I changed the code so that a write to $2006 that updates the vram address and occurs at that point (post 255, pre 256) will cause the PPU to ignore the Y scrolling increment on cycle 256, preserving the new value that the CPU has written instead (commit). This fixes Kick Master in my case, and is the only fix I found that did not break any test rom.
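That fix can be sketched like this (hypothetical names, not the actual commit; the Y increment is simplified to the fine-Y step only):

```c
#include <stdbool.h>
#include <stdint.h>

// Sketch of the race described above: a second $2006 write landing between
// cycles 255 and 256 wins over the cycle-256 Y scrolling increment.
typedef struct {
    uint16_t v, t;             // VRAM address and temporary address
    bool suppress_y_increment;
} Ppu;

// Second write to $2006: completes t and copies it into v. If it lands
// right before the cycle-256 increment, mark the increment as suppressed.
void write_2006_second(Ppu *ppu, uint8_t value, int cycle) {
    ppu->t = (uint16_t)((ppu->t & 0xFF00) | value);
    ppu->v = ppu->t;
    if (cycle == 255)
        ppu->suppress_y_increment = true;
}

void ppu_cycle_256(Ppu *ppu) {
    if (!ppu->suppress_y_increment) {
        // Simplified: only the fine-Y part of the full increment is shown.
        if ((ppu->v & 0x7000) != 0x7000)
            ppu->v += 0x1000;
    }
    ppu->suppress_y_increment = false;
}
```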
Just a small update, Mesen 0.2.1 is out.
0.2.1 fixes a number of issues:
-PPU race condition that caused a display glitch on the opening screen in Kick Master
-Properly implements MC-ACC IRQ behavior (fixes Incredible Crash Dummies screen shaking)
-Fixes a bug with the triangle channel at low frequencies
-Fixes a bug with the sound mixer when setting low volumes for individual channels
-Fixes PPU timing bugs for both PAL & Dendy
It also adds a few new features:
-Ability to record audio to .wav files
-Added support for a number of NES 2.0 submappers
-Added a few features in the debugger, and fixed a few bugs.
Just a heads up: 0.2.1 breaks compatibility with save states from previous versions.
Thanks to x0000 & Eugene.S for the bug reports!
As always, if you find any bugs, please let me know! :)
Another small update: 0.2.2 is out.
New stuff:
-CPU overclocking/underclocking (CPU+APU, or only CPU, which allows overclocking without altering sound pitch)
-Mapper 19/69/210 support
-VRC6, MMC5, Sunsoft 5B and Namco 163 expansion sound support
-And a couple of small items (AxROM 4-bit PRG select, NES 2.0 CHR ram size field support, MMC5 bugfixes)
And another update: 0.3.0
This one adds a fair amount of stuff:
-NSF/NSFe support that turns the emulator into a mini music player:
-7-Zip support, and support for compressed archives containing multiple roms
-Built-in game database to auto-correct iNES headers (a mix of Nestopia's and NesCartDB's databases)
-Support for 24 more mappers, bringing the total to over 130.
-Log window tool to view info about the loaded roms.
-VS System zapper support and fixes for freezes in RBI Baseball, TKO Boxing and Super Devious.
-A number of bug fixes related to the debugger.
Let me know if you find any problems or have any feature requests.
Sorry for bumping this again!
0.3.1 is out:
-VRC7 sound support
-MMC5 vertical split mode support
-Input devices (i.e. controller, zapper, four player adapter) are now automatically selected when loading a recognized rom
-Improved support for VS System (automatically selects the proper PPU palette and input mappings)
-Plus a few bug fixes and small improvements
Haven't tried it out yet, but...
1) bumping isn't nearly as looked down here as it is on other forums.
2) I'm pretty sure a new release of your emulator counts as a good reason to bump the thread.
A couple of updates - 0.3.2 and 0.4.0 have been released.
Updates:
-Added support for 46 more mappers
-DirectInput devices are now supported
-UI is now also available in Spanish & Russian
-Controller button state can now be shown on the screen
-Fixed a DirectX crash that caused the emu to crash on some computers
-Fixed an audio bug with the square channel sweep unit
-Improved open bus implementation (passes "allpads" test)
Just wanted to say thanks for all the hard work! There's a lot of NES emulators out there so standing out is hard, but I'm very impressed by this so far and see a lot of potential
thanks again
MP2E wrote:
Just wanted to say thanks for all the hard work! There's a lot of NES emulators out there so standing out is hard, but I'm very impressed by this so far and see a lot of potential
thanks again
Thanks! It's always nice to hear what people think of the emulator.
On a side note, version 0.4.1 is out :)
Does anyone know where Mesen sets up its directories for save game battery data and configuration files? Does anyone know if these directories are customizable? I'd like to set it up so the emulator and all its save data are portable and I can just keep it all in one directory. This way I can throw everything on a flash drive for when I am on the go.
By default, Mesen will keep all of its data in a "Mesen" subfolder in your "Documents" folder.
You can turn on "portable" mode by appending _P to the .exe's name. i.e: Mesen_P.exe
This will make Mesen save all of its data in a "Mesen" folder in the same folder as the .exe itself (which should make it work from a USB key, but I can't say I've ever tried!)
Sour wrote:
By default, Mesen will keep all of its data in a "Mesen" subfolder in your "Documents" folder.
You can turn on "portable" mode by appending _P to the .exe's name. i.e: Mesen_P.exe
This will make Mesen save all of its data in a "Mesen" folder in the same folder as the .exe itself (which should make it work from a USB key, but I can't say I've ever tried!)
Thank you. Lemme kiss your face. MAWH!
I'm in portable mode now. I don't see where Mesen is storing its cheat files such as Game Genie and Pro Action Replay. Does anyone know where those are saved and what formats Mesen accepts? I have a few cheat packs I would like to import if possible.
The cheats are stored in the settings.xml file along with everything else.
Mesen doesn't support any cheat file format (yet), I'm aware of the .CHT files FCEUX uses, and I think Nestopia has its own XML format for cheats as well?
Is the cheat pack you're hoping to import in either of these formats, or does another format exist as well?
Being able to import/export cheats is on my list of things to do, but I haven't gotten around to doing it yet.
Sour wrote:
The cheats are stored in the settings.xml file along with everything else.
Mesen doesn't support any cheat file format (yet), I'm aware of the .CHT files FCEUX uses, and I think Nestopia has its own XML format for cheats as well?
Is the cheat pack you're hoping to import in either of these formats, or does another format exist as well?
Being able to import/export cheats is on my list of things to do, but I haven't gotten around to doing it yet.
I download the cheat files from
http://gamehacking.org
The site lets you download the cheats in several different formats. I was using puNES, which used .CHT files, and was hoping to import them all, although having to redownload them in a different format wouldn't be too much of a pain.
uVSthem wrote:
I download the cheat files from
http://gamehacking.org
The site lets you download the cheats in several different formats. I was using puNES, which used .CHT files, and was hoping to import them all, although having to redownload them in a different format wouldn't be too much of a pain.
The latest version (0.4.2) lets you import FCEUX/Nestopia cheat files (CHT/XML) as well as export them back to XML (Nestopia format).
The cheat list's UI changed quite a bit, and I added a built-in cheat DB with over 10k cheats that you can easily import cheats from as well.
Just a heads up though, these changes required me to erase any existing cheats (so upgrading to 0.4.2 will remove any cheats you had set up in Mesen)
Sour wrote:
uVSthem wrote:
I download the cheat files from
http://gamehacking.org
The site lets you download the cheats in several different formats. I was using puNES, which used .CHT files, and was hoping to import them all, although having to redownload them in a different format wouldn't be too much of a pain.
The latest version (0.4.2) lets you import FCEUX/Nestopia cheat files (CHT/XML) as well as export them back to XML (Nestopia format).
The cheat list's UI changed quite a bit, and I added a built-in cheat DB with over 10k cheats that you can easily import cheats from as well.
Just a heads up though, these changes required me to erase any existing cheats (so upgrading to 0.4.2 will remove any cheats you had set up in Mesen)
Sounds good, thanks.
I've encountered a bug and I have a feature request. The bug is that whenever I try to exit the emulator in full screen with a rom loaded, I get a "Mesen (beta) is not responding" error. As for my request, I would like to be able to select the screen resolution while in full screen mode. Right now, if I am correct, full screen mode uses whatever resolution the desktop is set to. Also, if I could open the emulator and go straight into full screen mode without having to select it every time, that would be great as well.
uVSthem wrote:
As for my request, I would like to be able to select the screen resolution while in full screen mode. Right now, if I am correct, full screen mode uses whatever resolution the desktop is set to.
Excuse me while I help the developer triage this request:
I thought sticking to the device's native pixel resolution was best for fixed-pixel displays such as LCDs and plasma panels. Are you using a CRT? Or do you have a very slow video card that can't upscale in hardware?
I have my PC connected to a 4K TV. For whatever reason the hardware upscaling on my Nvidia card is bugged and when I set it to upscale it only upscales to 4K at 30p instead of my TV’s native 4k at 60p. Besides, sending a signal in a display’s native resolution should always produce better image results than upscaling.
Then why not just run the emulator in 3840x2160 and tell the emulator to enlarge the video by a factor of 9?
Or are you trying to say that your video card is incapable of outputting 3840x2160 at 60 fps? I know 3840x2160 resolution on HDMI 1.x tops out at 30 fps. Is the issue that your video card doesn't support HDMI 2? Does the mouse cursor lag when you run 3840x2160 on your desktop?
"Besides, sending a signal in a display’s native resolution should always produce better image results than upscaling."
But you're asking to add a feature to send a signal at a resolution other than the display's native resolution.
I can run 4K at 60 no problem; my TV and video card are both HDMI 2.0. The issue is that Mesen runs at whatever resolution the desktop is set to. I can switch my desktop resolution to 4K and then Mesen will be at 4K, but I was hoping Mesen could have its own resolution setting so I wouldn't have to change the desktop resolution whenever I open and close the emulator.
Can you not connect the TV as a secondary monitor & set its resolution independently of your monitor's resolution?
I have a 2k computer monitor and a 1080p projector hooked to my computer, and both devices operate at different resolutions without any upscaling being applied.
Thanks for the crash report, I can't seem to reproduce it on my end though.
Does it really only happen when you exit Mesen while in fullscreen mode on your end? I can't really see why fullscreen mode would cause this kind of crash.
There are a few debug switches you can use with Mesen (/novideo, /noaudio, /noinput) that disable the corresponding features; could you check whether running with all or some of these prevents the crash when exiting? This would help me get a general idea of where the bug might be.
As far as starting in fullscreen mode goes, this is already supported :)
Just use the /fullscreen command line option and Mesen will start in fullscreen mode.
Sour wrote:
Can you not connect the TV as a secondary monitor & set its resolution independently of your monitor's resolution?
I have a 2k computer monitor and a 1080p projector hooked to my computer, and both devices operate at different resolutions without any upscaling being applied.
Thanks for the crash report, I can't seem to reproduce it on my end though.
Does it really only happen when you exit Mesen while in fullscreen mode on your end? I can't really see why fullscreen mode would cause this kind of crash.
There are a few debug switches you can use with Mesen (/novideo, /noaudio, /noinput) that disable the corresponding features; could you check whether running with all or some of these prevents the crash when exiting? This would help me get a general idea of where the bug might be.
As far as starting in fullscreen mode goes, this is already supported
Just use the /fullscreen command line option and Mesen will start in fullscreen mode.
I only have one monitor. I attached a photo of the error I am getting. So far I have only been able to get it when exiting in full screen. I'm running Windows 10 x64.
I did some more playing around with this crash I am having. It only happens when I have "Pause emulator when in background" checked.
Thanks! That made it a lot easier to figure out what was causing the issue.
I fixed the code, so the crash/freeze should be gone in the next version.
Version 0.5.0 has been released, which should fix the crash you were getting.
Let me know if you still get the same problem after upgrading.
The "Pause emulator when in background" exit bug appears to have been fixed. I found a new bug now while trying to exit with the google drive sync enabled. I attached a photo of the error. The only way I can exit is if I turn off the google drive sync feature.
Thanks - it looks like the zip file stored on google drive is corrupted, which is causing the crash.
You might be able to fix it by deleting Mesen from your authorized apps in your google account's settings and then setting up google integration in Mesen again.
I'm not quite sure how the file got corrupted in the first place, though.
Either way, I will change the code to ensure google drive's content is overridden with the local copy in the event it has become corrupted, which should prevent the crash from happening in the future.
Sour wrote:
Thanks - it looks like the zip file stored on google drive is corrupted, which is causing the crash.
You might be able to fix it by deleting Mesen from your authorized apps in your google account's settings and then setting up google integration in Mesen again.
I'm not quite sure how the file got corrupted in the first place, though.
Either way, I will change the code to ensure google drive's content is overridden with the local copy in the event it has become corrupted, which should prevent the crash from happening in the future.
I tried deleting the app from my google account setting. It didn't change anything.
uVSthem wrote:
Sour wrote:
I tried deleting the app from my google account setting. It didn't change anything.
It looks like Google Drive integration was simply broken - it tried downloading the data before ever uploading it a single time, which made it crash.
Should be fixed in 0.5.1, let me know if you still get problems with it.
Haven't bumped this for a new release in a while, but... I just released 0.6.0.
Changes for 0.5.2, 0.5.3 and 0.6.0:
-Debugger: Several new features and improvements (thanks for all the feedback!)
-Audio: Much lower sound latency, option to reduce popping sounds on DMC channel, panning and crossfeed options
-Video: Added a few video options (disable sprites/bg, force sprites/bg in the first 8 pixels on the left) and more preset palettes
-Added support for 41 UNIF boards
-Added support for 10 iNES mappers (~216 mappers supported)
-A few bug fixes and other small changes/additions
Quote:
(...)force sprites/bg in first 8 pixels on the left(...)
Wait, what do you mean? AFAIK, the emulator can ignore the $2001:$06 setting. A few games look OK when there's no scrolling (Mega Man VI), but others have a messed-up column. You mean such a column is... fixable!?
Zepper wrote:
Quote:
(...)force sprites/bg in first 8 pixels on the left(...)
Wait, what do you mean? AFAIK, the emulator can ignore the $2001:$06 setting. A few games look OK when there's no scrolling (Mega Man VI), but others have a messed-up column. You mean such a column is... fixable!?
All it does is what you first guessed: it ignores bits 1 & 2 of $2001 and forces the display of the BG & sprite layers.
So it does look bad in most cases, but it was something that a user requested, if I remember correctly, so I added the option.
Thank you for channel panning, this is great feature.
I'll file an issue for this in GitHub, but: what layer/framework are you using for things like joypad input? I can't seem to find any kind of definitive information on this. All I can tell you is that no input from my Playstation/Playstation 2-to-USB adapter (and I have several, but the one I use has the best overall support/reliability) is detected when trying to define buttons/inputs. Said adapter works fine with DirectInput (read: Nestopia, Steam games, etc. -- barring games which exclusively use XInput (shame on them)), hence my question.
koitsu wrote:
I'll file an issue for this in GitHub, but: what layer/framework are you using for things like joypad input? I can't seem to find any kind of definitive information on this. All I can tell you is that no input from my Playstation/Playstation 2-to-USB adapter (and I have several, but the one I use has the best overall support/reliability) is detected when trying to define buttons/inputs. Said adapter works fine with DirectInput (read: Nestopia, Steam games, etc. -- barring games which exclusively use XInput (shame on them)), hence my question.
That's odd - Mesen uses both XInput & DirectInput. It uses XInput for devices that support it (because it is easier to use and more reliable from a code perspective), and falls back to DirectInput when the device doesn't support XInput.
At the very least, it works for XBox 360/One (XInput) and PS4 controllers (DirectInput). But unfortunately that's all I have to test with, I'm pretty sure my old Gravis Gamepad has been thrown in the trash long long ago :)
Sour wrote:
That's odd - Mesen uses both XInput & DirectInput. It uses XInput for devices that support it (because it is easier to use and more reliable from a code perspective), and falls back to DirectInput when the device doesn't support XInput.
At the very least, it works for XBox 360/One (XInput) and PS4 controllers (DirectInput). But unfortunately that's all I have to test with, I'm pretty sure my old Gravis Gamepad has been thrown in the trash long long ago :)
Is there anything I can do to help? Some kind of debug build I can run that might display enumerated DirectInput devices (to verify Mesen is seeing it at all), or a lower-level DirectInput dump for device input? Or maybe buy you one of these PS/PS2-to-USB adapters (I could even include a PS2 joypad if you need one)?
Edit: I figured it out. Mesen makes the blind assumption that the system only has 1 single joypad/game input device. If you have more than one, and "the default" (or, more likely, whatever the first entry in enumeration returns) chosen is not what you actually have hooked up/are using, then yeah, no input is detected. Let me rephrase:
The PS/PS2-to-USB adapter I use has 2 PS/PS2 joypad ports on it, i.e. it can be used for up to 2 players. So, in Windows, what shows up (in Device Manager) is 2 HID-compliant game controllers (one for each port). I had my PS2 joypad hooked up to the 2nd port ("2nd device"). Mesen only cared about the 1st device. This is why Nestopia (as a comparative example) under Options -> Input gives you a pulldown of joysticks/devices you can pick from (with the above adapter, you get 2 devices named "Twin USB Joystick" (one per port)). I simply moved the joypad to the other port and instantly Mesen was able to detect the input.
I've run into this problem in other games (commercial ones on Steam), many of which don't give you the ability to pick which device/thing you want as your joypad. Some using SDL end up picking the 2nd device/port (?!?!?) by default, while other titles would pick the 1st. I think the enumeration order just happens to vary based on several factors, which is why being able to get a list of devices + select the one you want is important. Not picking on Mesen at all in the least, but it's almost like these programmers are blindly assuming you only have one joypad or HID-compliant device. Bad assumption!
koitsu wrote:
it's almost like these programmers are blindly assuming you only have one joypad or HID-compliant device. Bad assumption!
And they get away with making that assumption because the producer is telling the programmers to prioritize online multiplayer over split- or otherwise shared-screen multiplayer, in an attempt to sell two to four copies of a single game to a single household.
koitsu wrote:
Edit: I figured it out. Mesen makes the blind assumption that the system only has 1 single joypad/game input device. If you have more than one, and "the default" (or, more likely, whatever the first entry in enumeration returns) chosen is not what you actually have hooked up/are using, then yeah, no input is detected.
That's odd, this definitely works on my end, and I know the code is meant to support this.
I just retested to be 100% sure - I can connect and use a PS4 controller (directinput) + a 8bitdo snes gamepad (directinput) + a xbox 360 controller (xinput) all at once without any issues.
There must be more to this than just picking the first device. Maybe the way the devices are returned is slightly different due to both ports being on the same physical device?
Edit: Taking a look at the code I see 2 possibilities mostly:
A) both ports share the same GUID identifier (this sounds like it might be fairly likely - although it would be odd for a GUID to not be unique...). I have code that prevents Mesen from attempting to reconfigure GUIDs that it already knows about (to speed things up)
B) one of the ports isn't classified as a "DI8DEVCLASS_GAMECTRL" (which leaves KEYBOARD, POINTER, or DEVICE as the only alternatives - seems a bit unlikely)
Sour wrote:
That's odd, this definitely works on my end, and I know the code is meant to support this.
I just retested to be 100% sure - I can connect and use a PS4 controller (directinput) + a 8bitdo snes gamepad (directinput) + a xbox 360 controller (xinput) all at once without any issues.
There must be more to this than just picking the first device. Maybe the way the devices are returned is slightly different due to both ports being on the same physical device?
That would be my suspicion. The results of device enumeration, as I said, varies based on several factors (you've got how the USB stack detected things (this can and does vary), how whatever framework used to access said USB device defines the order, then the DirectInput layer on top of that, then "how" that makes it into the PL (e.g. the order in a linked list might be consistent, but in a hash it might not be), etc...).
As mentioned, if you'd like, I can send you one of these devices (I'll have to figure out how to ship something to Canada without paying an arm and a leg, as standard international mail would require me to go to the PO and stand in line for 30 minutes (no joke)) and you can play with it yourself. There are also 4-port versions, and 1-port versions that actually advertise 2 ports/devices on the USB bus (I have a couple of these from different vendors too. Can't make this crap up -- the IC obviously supports 2 ports but the manufacturer/vendor didn't disable the other port in firmware or otherwise). So you can see why there's a need to be able to select the actual device ("port") associated with the input... :-)
If this complicates the code and detection of such a thing (for a menu/selection pulldown) is needed, I can get you USB descriptor data for several different devices I have access to.
I'm pretty sure this is just a silly mistake on my part.
In the device's info, I used "guidProduct", but there is also "guidInstance".
https://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.reference.dideviceinstance(v=vs.85).aspx
I'm fairly confident using guidInstance will fix this.
I'll make a quick build w/ guidInstance instead and post it here in a few minutes.
I forget: Are USB "composite devices" allowed to have two devices of the same class?
test build link
Let me know if that fixes it.
tepples wrote:
I forget: Are USB "composite devices" allowed to have two devices of the same class?
Not much of an expert on the hardware side of things, so I don't really know.
tepples wrote:
I forget: Are USB "composite devices" allowed to have two devices of the same class?
If I understand the question, then yes: you can specify the number of interfaces in the configuration descriptor. Keyboards and mice tend to do this.
Sour wrote:
test build link
Let me know if that fixes it.
Executable doesn't run: missing MesenCore.dll.
My bad - that's what I get for making a build right after making a lot of changes to add Linux support in the past 2 days.
Fixed the link, this build should actually run!
Yup, confirmed -- that fixes it. :-) I tested with two joypads connected to the same device, ensuring it differentiates between both. One more bug fixed, woot!
Great! Thanks for reporting this and helping me debug it!
What do you think about the new advanced integer-math NTSC algorithm?
Blargg's NTSC filter is good, but bisqwit's filter has some advantages over it, including:
- more accurate moiré
- more accurate notch filtering
- integer math gives good speed
- and, most importantly: a PAL filter can easily be built on top of this algorithm by slightly modifying it.
Feos tried to implement his own PAL filter for FCEUX, but failed because it was non-integer and had poor performance.
Bisqwit's algorithm has great potential. It would be nice to see it in Mesen.
I have no idea if this is the right place to ask for features, so I apologize in advance for any annoyance.
Would it be possible to add a feature to the debugger where you could get the rate of occurrence of each opcode? Similar to the CDL, but recording the number of times all instructions are executed. Vsauce's Zipf theory YouTube video kinda got me thinking, but I think it would be valuable information in profiling how games use opcodes, their occurrence rate, and deviations between 65x variants (other 65x CPUs).
I'm not sure of the best way to store or display this data, though. Overall/absolute? Per-second instances, with an average across 60 frames? Maybe even have a paired or triplet option: the occurrence of an opcode followed by one or two other opcodes (since the opcodes on the 65x tend to be simplistic, there are probably regular paired and grouped pattern occurrences; I'm interested in the ones other than the most common expected clc+opcode, etc.).
tomaitheous wrote:
Would it be possible to add a feature to the debugger where you could get the rate of occurrence of each opcode? Similar to the CDL, but recording the number of times all instructions are executed. Vsauce's Zipf theory YouTube video kinda got me thinking, but I think it would be valuable information in profiling how games use opcodes, their occurrence rate, and deviations between 65x variants (other 65x CPUs).
I'm really not sure how this information is going to be useful without extremely deep or complex analysis (often having to be done by a human anyway). All you're going to be able to get at a basic level is a) what opcode "class" is being utilised (load accum vs. load X vs. store accum vs. no-op vs. branch), or b) what specific opcodes (which would include addressing mode, e.g. A9 (LDA imm) vs. A4 (LDY zp)) are being called at what amount/rate.
How would this information actually help you in any practical manner? "I think it would be valuable information"
begs the question.
As for "deviations between 65x variants", this isn't relevant at all unless I'm missing something obvious? There is only one 6502 in the NES and its counterparts; there aren't going to be "deterministic through profiling" variances between, say, a Dendy CPU and a NES CPU. If this was a multi-platform emulator, then I could see what you're getting at (distantly, bordering on "not really but kinda"), but it's not.
If you're interested in this type of extremely low-level performance profiling (in general), I strongly recommend you look into hardware performance counters on Intel x86/x64 CPUs. I have no familiarity with it, but here you go (and this too).
If there was a feature I'd want in a NES debugger relating to performance/profiling, it would be the ability to represent "amounts of time" (CPU cycles) spent in something (could be a subroutine, could be between address X and Y), as part of either VBlank or non-VBlank. How this is done now is a bit painstaking; rainwarrior mentions it in this Lizard Kickstarter update/blog (tweaking the R/G/B intensity bits based on start/end of routines, but there's also Lua if you're into that). If you ask me, I'd rather this be done through actual opcodes in a program (i.e. BRK $40 might start the tracking of cycles for performance counter #0 (say, support up to 16 of them), BRK $80 might turn it off, and then correlate that visually on-screen in some way -- allowing the programmer to move the BRK statements around to "narrow down" what takes up the most time in their routine). Again, this would be super useful for VBlank.
Statistics about byte frequency are good enough to identify whether a given chunk of data is x86 code, or 6502 code, or zlib-compressed, or just encrypted. (q.v.)
I think it would be perfectly plausible that statistics about opcodes, and/or opcode bigrams, could distinguish between different code generators and/or different tasks.
lidnariq wrote:
Statistics about byte frequency are good enough to identify whether a given chunk of data is x86 code, or 6502 code, or zlib-compressed, or just encrypted.
The previous post, if I understand correctly, was talking not about data (i.e. addresses or data being accessed by a program running on the CPU), but rather the regularity/count of opcodes themselves. I don't see how this is practically helpful. How I see it: you now know that within, say, 20000 cycles, 9730 of them were spent doing lda, 7000 were spent doing sta, and 3000 were spent in (general classification) branch statements, with random leftovers. What does this tell you? To me it says nothing and teaches me nothing.
As for data access and heuristic analysis from that: different subject, where I can see the usefulness. I have absolutely no idea how such heuristics would work (especially being able to somehow figure out the difference between x86 and 6502 -- but more importantly I don't see how that's relevant inside of a NES emulator? The code running is all 6502, but I digress). I also don't know how something would be able to determine whether or not something was zlib-compressed or encrypted based upon "how often" (?) some data was being accessed. Some kind of "pattern" analysis, yes, I can see that being relevant I suppose, but it seems like a massive CPU-churning time sink (hopefully such a thing would default to off). This is where things are WAY outside my pay grade (one of several trade-offs of not going to college/uni I suppose), which is why I'm not going into too much detail. Do we really have encryption going on in NES games? Let's be practical please. Compression, absolutely. Unique data structures (for pretty much anything), absolutely. Encryption? Come on.
This is a feature/subject that's like opening Pandora's Box. It's guaranteed to never be enough for what people want. A programmer's nightmare.
Eugene.S wrote:
What do you think about the new advanced integer-math NTSC algorithm?
I'm currently busy working on the Linux port, but I'll try implementing this after.
Mesen runs video filters in another thread, so it should still get decent performance whether the filter is fast or somewhat slow (so long as the algorithm can convert 60 frames per second on a single core)
tomaitheous wrote:
...
Isn't most/all of this achievable by parsing a trace log file? It contains all the information you would need to get those statistics. I'd imagine writing a script in python or anything else to parse the output would be fairly easy. Of course, if you wanted to analyze the execution over a long period of time, the trace log could become large relatively quickly, though.
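A minimal sketch of such a parsing script (the trace-log line format here is an assumption: an address, raw opcode bytes, then a three-letter mnemonic; the exact format varies between emulators, so the regex would need adjusting):

```python
import re
from collections import Counter

# Assumed line shape: "C000 $A9 $00  LDA #$00" -- adjust the regex for
# whatever trace-log format your emulator actually emits.
MNEMONIC_RE = re.compile(
    r"^\s*\$?[0-9A-Fa-f]{4}\s+(?:\$[0-9A-Fa-f]{2}\s+){1,3}([A-Z]{3})"
)

def opcode_stats(lines):
    """Tally individual opcodes and opcode bigrams from trace-log lines."""
    singles, bigrams = Counter(), Counter()
    prev = None
    for line in lines:
        m = MNEMONIC_RE.match(line)
        if not m:
            continue  # skip lines that aren't instruction traces
        op = m.group(1)
        singles[op] += 1
        if prev is not None:
            bigrams[(prev, op)] += 1
        prev = op
    return singles, bigrams

# Toy excerpt (addresses/bytes made up for illustration):
trace = [
    "C000 $A9 $00     LDA #$00",
    "C002 $8D $01 $20 STA $2001",
    "C005 $4C $00 $C0 JMP $C000",
]
singles, bigrams = opcode_stats(trace)
print(singles.most_common())
print(bigrams.most_common())
```

In practice you would feed it the log file line by line (`opcode_stats(open("trace.log"))`) so a multi-gigabyte log never has to sit in memory all at once.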
Sour wrote:
Isn't most/all of this achievable by parsing a trace log file? It contains all the information you would need to get those statistics. I'd imagine writing a script in python or anything else to parse the output would be fairly easy. Of course, if you wanted to analyze the execution over a long period of time, the trace log could become large relatively quickly, though.
That's what I'd advocate doing too: parsing the trace log.
And yes, trace log analysis is pretty much what most of us assembly hackers (rephrased: "the romhacker guys who do assembly and are thus highly sought after") end up sifting through. It's par for the course when it comes to reverse-engineering something. On several projects, I've sifted through a few hundred megabytes of trace logs to work out details, often on paper. It's how I was able to decode some of the compressed graphics/tiles in Otogirisou (SFC game) (fucking ChunSoft, my god, code from the bowels of hell -- meaning it's complex/hard for me to understand). I then write down details (incl. code) on pieces of paper and work out the details (esp. if graphical) on graph paper, as well as alongside a .txt file (more common with NES/FC titles; SNES/SFC are often too complex for this (matter of style)). Some examples attached (of a 10 or 12 page pamphlet of my notes)
My point being: yeah, trace logs. :-)
koitsu wrote:
9730 of them were spent doing lda, 7000 were spent doing sta, and 3000 were spent in (general classification) branch statements, with random leftovers. What does this tell you? To me it says nothing and teaches me nothing.
Because you're entirely missing the point.
It's not that there were N LDAs, P STAs, and "random leftovers". It's that this time, there were X% more LDAs, Y% fewer STAs, and the exact shape of the "random leftovers" tells you whether what's happening is the same or not. So, yes, this should be visualized, not tabular.
Think of it as being identical to spectroscopy (mass spec or IR spec, either). You don't care that exactly X% of the light at 10µm was transmitted: at best that tells you concentration and at worst it tells you nothing. But you do care that the total shape has a set of spikes that correspond to a C=O double bond.
Quote:
As for data access and heuristic analysis from that: different subject, where I can see the usefulness. I have absolutely no idea how such heuristics would work (especially being able to somehow figure out the difference between x86 and 6502 -- but more importantly I don't see how that's relevant inside of a NES emulator? The code running is all 6502, but I digress).
You got distracted. That was an example of a parallel: I can identify whether a chunk of binary is 6502 or x86 by looking at the relative frequency of bytes or of byte bigrams.
Because I have seen first-hand examples of how these statistics are useful for static analysis, it makes sense to me that they could be useful for dynamic analysis.
Yeah, sorry, I still don't see the applicability (and I read your explanation 3 times). I'm caught somewhere between "I think I just got shat upon by someone with a college degree; am I reading someone's thesis?" and "Why would there be chunks of x86 binary in a NES game?"
I think where I said "this is where things are WAY outside my pay grade (one of several trade-offs of not going to college/uni I suppose)" applies greatly -- if it didn't before, it absolutely does now.
Edit: A colleague of mine has read this recent feature request/subject and can explain to me the usefulness/applicability of said feature. I'm getting an explanation now off-thread.
lidnariq wrote:
I can identify whether a chunk of binary is 6502 or x86 by looking at the relative frequency of bytes or of byte bigrams.
Could you, really? Wouldn't the noise of data and operand-data completely drown the opcode signal?
Like, the request seems to want to identify just the opcodes, which are already being sorted out by the emulator.
koitsu wrote:
How this is done now is a bit painstaking;
rainwarrior mentions it in this Lizard Kickstarter update/blog (tweaking the R/G/B intensity bits based on start/end of routines, but there's also Lua if you're into that). If you ask me, I'd rather this be done through actual opcodes in a program (i.e. BRK $40 might start the tracking of cycles for performance counter #0 (say, support up to 16 of them), BRK $80 might turn it off, and then correlate that visually on-screen in some way -- allowing the programmer to move the BRK statements around to "narrow down" what takes up the most time in their routine). Again, this would be super useful for VBlank.
"STA $2001" is using an "actual opcode". ;P I don't really understand the advantage of BRK here (it saves 3 bytes but requires an IRQ handler?).
If you're trying to time within VBlank then you're not doing something you can see in the NES visual output anyway (though you could use an oscilloscope with $4011, $4016, etc. to get a signal across in an alternate way). I was demonstrating the use of $2001 there specifically because it makes visual output on the target hardware.
If you want to time code within an emulator there's a lot of ways to do it. You can use breakpoints. You can trigger Lua from execution points, write instructions (including $2001), or various other triggers, and then use it to gather/process/output your statistics. There's also thefox's custom build of Nintendulator that adds profiling registers at $4020-$403F.
I'll get to trace logs in a moment, but I actually wrote a trace log parser for Lizard, and it's one of the things I "really" use when I want to solve a specific performance problem. The visual check through $2001 is just a rough thing that I can have on while playing through to identify problem spots. When I need more detailed info I use an emulator and other tools. (Breakpoints, trace logs, LUA, etc.)
Sour wrote:
Isn't most/all of this achievable by parsing a trace log file? It contains all the information you would need to get those statistics.
...
I'd imagine writing a script in python or anything else to parse the output would be fairly easy. Of course, if you wanted to analyze the execution over a long period of time, the trace log could become large relatively quickly, though.
Yes, this is doable through trace logs, though the size of the output will become astronomical in a fairly short period of time. It would be much more practical to do it like a code/data log and just have a 256-entry table that increments the current opcode's counter whenever an instruction is executed. This should be easy to add to any open source emulator, but I don't really see at this point why it would be a feature needed in any release version of an emulator.
Sour wrote:
tomaitheous wrote:
...
Isn't most/all of this achievable by parsing a trace log file? It contains all the information you would need to get those statistics. I'd imagine writing a script in python or anything else to parse the output would be fairly easy. Of course, if you wanted to analyze the execution over a long period of time, the trace log could become large relatively quickly, though.
If the trace log contains time stamp entries, then this would work, though it's less efficient because of the gigantic bloat. Similar to a CDL, it would take less space to have a binary output for each 'frame': 256 entries with an occurrence counter precision of 32 bits -> 1024 bytes per frame. Clear the results on the next frame, recount, save the frame of data to a binary file. For pairs and triplets, it'd be double or triple the size, which is still tiny in comparison to a trace log.
Anyway, the problem I'm having is that even for just a second of trace log, Mesen's "open" function (rather than "save" or automatically saving) appears to leave Notepad completely unresponsive when it tries to open the file (I have other utilities better suited to handling huge text files in the gigabyte range associated with .txt files in Windows, but Mesen doesn't seem to use the Windows-associated app). Unless it's not crashing, but just stalled for like 30 minutes.
I dunno. Maybe I can try messing with mednafen's source code.
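The per-frame binary layout proposed above (256 counters, 32 bits each, so 1024 bytes per frame) is simple enough to sketch. This is only an illustration of the proposed format, not anything Mesen actually implements:

```python
import struct

FRAME_FMT = "<256I"  # 256 little-endian uint32 counters = 1024 bytes per frame

def pack_frame(counts):
    """Serialize one frame's worth of per-opcode execution counts."""
    return struct.pack(FRAME_FMT, *(counts[op] for op in range(256)))

def unpack_frame(blob):
    """Recover the 256 counters from a 1024-byte frame record."""
    return struct.unpack(FRAME_FMT, blob)

# The emulator side would do counts[opcode] += 1 on every instruction fetch,
# append pack_frame(counts) to the output file once per frame, then reset.
counts = [0] * 256
counts[0xA9] = 9730  # e.g. LDA #imm executed 9730 times this frame
counts[0x8D] = 7000  # STA abs
blob = pack_frame(counts)
print(len(blob))
```

At 60 fps that is about 60 KiB of output per second, versus hundreds of megabytes for an equivalent text trace log.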
This is a tangent, but let's see if I can wrangle a simple example for the static analysis:
Say I had a certain "game" that was released as an Atari 2600 ROM embedded in an emulator that's shipped as a Windows binary. It hasn't been encrypted at all; I just don't know where it is. I also don't know the right questions to ask to find tools to take PE binaries apart.
I happen to have already written a program that will let me look at any given file on disk as a series of histograms: X axis is byte #, Y axis is Nth slice.
Because I know what x86 and 6502 code "looks" like in a histogram:
Attachment:
histograms-of-bytes-of-different-ISAs.png [ 4.54 KiB | Viewed 3572 times ]
I can use my brain as a crude (but fast) means of pattern recognition.
Using that, I can find the embedded 6502 code inside the x86 code.
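A crude version of that histogram tool fits in a few lines (a sketch of the idea only; the actual program presumably renders the slices as an image rather than returning counters):

```python
from collections import Counter

def slice_histograms(data, slice_size=4096):
    """Byte-value histogram for each consecutive slice of a file.
    Different ISAs leave different fingerprints: 6502 code leans on a
    handful of opcode values (e.g. $A9, $8D, $20), x86 on others."""
    return [Counter(data[off:off + slice_size])
            for off in range(0, len(data), slice_size)]
```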
And all of this is a tangent. You won't find x86 code in an NES game (unless it's there by accident because the assembler didn't clear its memory). That's not my point. My point is that these histograms let you find oblique things you didn't know. And they let you find things you didn't know you didn't know. (q.v. #1, q.v. #2)
rainwarrior wrote:
Could you [really identify whether a binary is 6502 or x86]? Wouldn't the noise of data and operand-data completely drown the opcode signal?
Well, I can tell the difference between the two in the above image.
lidnariq wrote:
rainwarrior wrote:
Could you [really identify whether a binary is 6502 or x86]? Wouldn't the noise of data and operand-data completely drown the opcode signal?
Well, I can tell the difference between the two in the above image.
Okay, I can buy that.
tomaitheous wrote:
Similar to a CDL, it would take less space to have a binary output per 'frame': 256 entries with 32-bit occurrence counts -> 1024 bytes per frame. Clear the counts on the next frame, recount, and save the frame's data to a binary file. For pairs and triplets it'd be double or triple the size, which is still tiny compared to a trace log.
This should be relatively easy to implement and would require little to no UI, so I'll see what I can do.
tomaitheous wrote:
but Mesen doesn't seem to use the Windows-associated app
The button correctly uses Notepad++ for me. The file's extension is "log", though, so you may have .log files associated with Notepad. Maybe I should rename the files to .txt instead.
You can open the file manually too, it'll be in Documents\Mesen\Debugger. The open button is just a shortcut to get the file opened.
lidnariq wrote:
My point is that these histograms let you find oblique things you didn't know. And they let you find things you didn't know you didn't know.
^ This. I mean, I already have an idea of what I'm looking for (or suspect), but it's also about the things you didn't know you didn't know.
koitsu: What I'm looking for goes way beyond the NES. The NES just happens to be something I'm looking at in relation to other systems, setups, environments, and processors. *If* I find something interesting, I want to be able to turn this into an independent research project for my undergrad thesis (yeah.. an undergrad thesis. My uni offers it in senior year for the honors program. My plan is grad school, so early research opportunities with faculty mentors look good on a resume. Plus, it might make for an interesting read - just the NES and certain related facts/patterns/etc.).
Foremost: okay, a colleague has explained to me what the actual usability of (a kind of) a spectrogram for code/data would be. I think for me, no, I wouldn't find this helpful, but I can see (distantly -- because again, outside my pay grade) how it'd be useful. However, I would imagine something like this could be incredibly useful for people who are doing something like, say, ripping NSFs. It's a bit of a stretch, but that's what I got out of it anyway.
As for this:
rainwarrior wrote:
koitsu wrote:
How this is done now is a bit painstaking; rainwarrior mentions it in this Lizard Kickstarter update/blog (tweaking the R/G/B intensity bits based on the start/end of routines, but there's also Lua if you're into that). If you ask me, I'd rather this be done through actual opcodes in a program (i.e. BRK $40 might start the tracking of cycles for performance counter #0 (say, support up to 16 of them), BRK $80 might turn it off, and then correlate that visually on-screen in some way -- allowing the programmer to move the BRK statements around to "narrow down" what takes up the most time in their routine). Again, this would be super useful for VBlank.
"STA $2001" is using an "actual opcode". ;P I don't really understand the advantage of BRK here (it saves 3 bytes but requires an IRQ handler?).
I'll start a separate thread, because the explanation I'll give is too long-winded and I'd feel like I was hijacking the existing subject.
Edit: said thread: viewtopic.php?f=3&t=15254
Trying to identify code by binary visualization isn't the request being made, though, just a digression on something that is somewhat similar and useful.
I don't think you could use an opcode execution profile (at least not what seems to be requested) to identify 6502 code from other types of code, really. Like, running x86 code on a 6502 will immediately engage in degenerate behaviour anyway; even 6502 code will probably "jump to junk" pretty quick unless you know where it's supposed to be run from too. There's probably a lot of better ways to identify it (lidnariq's binary histogram example seems like one way already).
I'm reminded of the computer studies of Shakespeare and contemporary authors where they used word frequencies and markov chains to identify authorial "signatures" to attribute works to their authors (and also resolve questions about "disputed" works of Shakespeare). I'm sure there's similar applications for code analysis and identification of coding styles, etc. (especially on work from the hand-coded assembly era), though you probably want a lot better information than just opcode frequency here.
I don't think "we don't know what we don't know" is justification to add a feature to a debugger, though. (e.g. every single feature of MSVC's debugger has a pretty deliberate purpose.) I think gathering information about opcodes is an interesting research question-- but it would be better served by a tool that can be modified by the researcher directly for their own purposes, i.e. a one off build, or something they modified themselves, or something scriptable like FCEUX's lua scripts (which can probably do this task, actually).
I guarantee once you've collected some data you're going to have a hundred "what if" extensions for that feature (what if I tracked pairs of consecutive opcodes, what about detecting hardware wait loops, etc.). You'd get the best results if you were prepared to modify it yourself for a personal build.
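As a concrete example of the "modify it yourself" approach, here is a minimal Python sketch that tallies single opcodes and consecutive pairs from a trace log. The line format assumed here is hypothetical (address, then hex bytes, then disassembly); adapt the parsing to whatever your emulator actually emits:

```python
from collections import Counter

def count_opcodes(lines):
    """Tally single opcodes and consecutive opcode pairs from a trace
    log whose lines (hypothetically) look like 'C000  A9 01  LDA #$01',
    i.e. the first hex byte after the address is the opcode."""
    singles, pairs = Counter(), Counter()
    prev = None
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        opcode = int(fields[1], 16)
        singles[opcode] += 1
        if prev is not None:
            pairs[(prev, opcode)] += 1
        prev = opcode
    return singles, pairs
```

Extending it to triplets, wait-loop detection, or per-frame resets is a few more lines each, which is exactly the "hundred what-ifs" point above.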
rainwarrior wrote:
I guarantee once you've collected some data you're going to have a hundred "what if" extensions for that feature (what if I tracked pairs of consecutive opcodes, what about detecting hardware wait loops, etc.). You'd get the best results if you were prepared to modify it yourself for a personal build.
I agree on this - this sort of very specific feature would be better off as a custom build or a script.
I do plan on adding C# scripting in the future - I should probably take a look at that soon; it should be relatively simple to implement.
tomaitheous wrote:
Would it be possible to add a feature to the debugger where you could get the rate of occurrence of each opcode? Similar to the CDL, but record the number of times each instruction is executed.
rainwarrior wrote:
FCEUX's lua scripts ... can probably do this task, actually
Just to follow up, as a demonstration of how useful scripting languages can be for stuff like this, here's a very simple FCEUX Lua script that does that. (Just the very basics: it logs opcodes and spits out the data to the Lua console when stopped. You could easily modify it to respond to input, log other information, output to a file, etc.)
Edit: Lua scripts were later disallowed on this forum. Uploading a ZIP containing what I think was the original script.
Thanks rainwarrior! I had no idea you could do that low-level of stuff with Lua scripts. I always thought Lua was integrated as mostly just a generic memory interface (and on a frame basis). Guess I've been missing out.
Stereo panning is a great feature and I'm happy someone finally implemented it in an emulator! I hope you'll add per-channel panning options for the expansion audio chips. I also hope you'll add shader support in the future.
I have a sound emulation bug to report that I hope you can get to the bottom of. I don't know how widely known this is, but in Super Mario Bros. 3, the underground music is supposed to have a randomized phase cancellation effect, and Mesen doesn't reproduce it. I made a recording from my NES to show this does happen on hardware, but the forum won't let me attach a WAV or FLAC file, so I uploaded it to MediaFire. I haven't tested a huge array of emulators, but FCEUX does it pretty well (Nestopia and Nintendulator do it too, though seemingly to a lesser extent). Conveniently, I hear FCEUX has a great debugger that presumably would be able to show what's going on, but I'm not a programmer in any way. To try to figure it out, I paused FCEUX (not the game), saved state, and recorded the two square waves and the triangle wave individually, pausing and reloading the state each time so the recordings start at the exact same time and are perfectly synchronized. Then I compared the waves of two different-sounding passages of the same notes side by side. The timing of the waves in relation to each other gets delayed slightly, producing phase cancellations. I haven't done the same kind of recording test with any other emulators, but my guess based on the sound of it is that other emulators don't randomize it and just produce one of the many possible variations. Hopefully you can get it figured out!
nothingtosay wrote:
The timing of the waves in relation to each other gets delayed slightly, producing phase cancellations. I haven't done the same kind of recording test with any other emulators, but my guess based on the sound of it is that other emulators don't randomize it and just produce one of the many possible variations. Hopefully you can get it figured out!
Phases aren't really random. The squares explicitly have their phase reset every time $4003/$4007 is written, which will happen at specific times during a piece of music (usually whenever a particular octave threshold is crossed, but some music engines reset phase on every note). Sound effects that interrupt the channels will disrupt it though (until the next phase reset). Emulating the phase correctly requires resetting the phase on $4003/4007 but also making sure that the "silent" behaviour also matches what the hardware does.
The triangle's phase is more effectively random; I think it's consistent at power on, but that's about it. Same deal with noise. However, this isn't randomization done by the emulator, this is randomization as a consequence of human input.
I see Game_Music_Emu's Nes_Apu and Nes_Oscs already do this, or close enough. They reset phase on writes to $4003/$4007, and they maintain pitch correct phase during periods of silence or inaudibility. I just don't know what else could be "wrong".
nothingtosay wrote:
Stereo panning is a great feature and I'm happy someone finally implemented it in an emulator! I hope you'll add per-channel panning options for the expansion audio chips. I also hope you'll add shader support in the future.
No problem! It was a feature that was requested by Eugene.S a while ago. I might add per-channel panning for the expansion chips eventually, but that would imply per-channel volume options for them too, and then you'd end up with twice as many sliders as there already are (and there's already too many for comfort!) - so I'm not sure. If I did, I'd have to hide them away in some advanced menu (most users wouldn't ever use this).
Shaders are on my list of things to get done eventually, though.
nothingtosay wrote:
[SMB3 BGM explanation]
To be perfectly honest, I can't really tell from the recording (I'm pretty terrible at this sort of thing). Also, when reloading a save state, I would expect an emulator to play the exact same sound sequence every single time, so long as no user input is done (with a controller) - if some emulators don't, then that would imply their save states aren't perfect. (But I may be misunderstanding what you meant about reloading from a save state to record & compare)
Mesen passes all of blargg's audio tests, including the apu_mixer ones that mute (or nearly mute) a channel by using another channel's output (which requires pretty high accuracy). FCEUX, Nintendulator and Nestopia (apparently) all fail these tests according to TASVideos.org's list.
... Or so I thought :) It looks like I inadvertently broke the apu_mixer tests with the panning options in 0.6.0. 0.5.3 works properly, and I just committed a fix for this. (I really need to add sound-checking capabilities to my automated tests.)
Also, if you use the volume/panning options, the sound emulation is automatically less accurate - you need to keep the volume of individual sound channels at 100%, and no panning (all at 0) for the apu_mixer tests to pass as expected. (Master volume has no impact on the accuracy, though)
If you want, you can give SMB3 a shot with 0.5.3 (you can download it here) and see if it fares any better.
Sour wrote:
I might add per-channel panning for the expansion chips eventually, but that would imply per-channel volume options for them too, and then you'd end up with twice as many sliders as there already are (and there's already too many for comfort!) - so I'm not sure. If I did, I'd have to hide them away in some advanced menu (most users wouldn't ever use this).
You could do like the NSF player NotSoFatso and have tabs for each expansion chip where the sliders are located. It's a little tough to get a good stereo mix on games that just use the regular channels, but expansion chips are where the panning option could really shine so I hope you'll add it. Not meaning to put pressure on you, of course.
Sour wrote:
To be perfectly honest, I can't really tell from the recording (I'm pretty terrible at this sort of thing).
Notice how the notes stay the same but the tone changes between the first "doo-doo, doo-doo, doo-doo" and the next in my recording? That's due to the waves being similar but slightly out of phase, so some of the frequencies cancel out when the waves are added together. The game varies the phase of the waves periodically, so different frequencies cancel out, creating different tones. In Mesen the tone stays the same, so the phase apparently isn't getting changed. In emulators that don't recreate the effect, it sounds to me like they only play one of the possible phase variations but never change to another after that.
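The cancellation described here can be demonstrated with a toy numeric sketch (plain Python, nothing NES-specific): mixing two identical square waves and measuring how the combined level depends on their relative phase.

```python
import math

def square(t, period, phase=0.0):
    """A +/-1 square wave sampled at integer time t, shifted by a
    fraction 'phase' of one period."""
    return 1.0 if ((t + phase * period) % period) < period / 2 else -1.0

def rms_of_sum(period, phase, n=10000):
    """RMS level of two identical squares mixed at a given phase offset."""
    total = 0.0
    for t in range(n):
        v = square(t, period) + square(t, period, phase)
        total += v * v
    return math.sqrt(total / n)

# In phase, the waves reinforce (RMS 2.0); half a period apart they
# cancel completely (RMS 0); offsets in between cancel partially.
```

A real square wave's harmonics cancel frequency by frequency, which is why the tone (not just the volume) changes, but the underlying mechanism is the same.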
Sour wrote:
Also, when reloading a save state, I would expect an emulator to play the exact same sound sequence every single time, so long as no user input is done (with a controller) - if some emulators don't, then that would imply their save states aren't perfect. (But I may be misunderstanding what you meant about reloading from a save state to record & compare)
I only used the save states for recording from FCEUX so I could get each channel individually for the exact same play-through of the music, not for testing the sound of any of the emulators. For that, I just went into the pipe in the sky in the first level and listened to see if the tone of the music changed over time.
Sour wrote:
Mesen passes all of blargg's audio tests, including the apu_mixer ones that mute (or nearly mute) a channel by using another channel's output (which requires pretty high accuracy). FCEUX, Nintendulator and Nestopia (apparently) all fail these tests according to TASVideos.org's list.
Right. It must just be something the test ROMs don't cover. Hopefully someone will make a test ROM that tests it and in the process promote awareness of and support for this cool little effect.
Sour wrote:
Also, if you use the volume/panning options, the sound emulation is automatically less accurate - you need to keep the volume of individual sound channels at 100%, and no panning (all at 0) for the apu_mixer tests to pass as expected. (Master volume has no impact on the accuracy, though)
In my testing, I kept all the channels at full volume and with no stereo panning. Anything else (besides master volume) would assuredly lessen the effect if it were emulated.
Sour wrote:
If you want, you can give SMB3 a shot with 0.5.3 (you can download it here) and see if it fares any better.
I checked and nope, it doesn't work there either, sorry to say.
Thanks for your consideration for all of this, Sour.
Also, to rainwarrior, I don't really have anything I can say in reply to what you wrote, but I appreciate that you did it and want to acknowledge that. Here's hoping this all gets worked out and both Mesen and Game_Music_Emu will get it emulated properly.
There are two components to square wave phase: the state of the period divider (which divides by 8+1 through 2047+1), and the state of the 8-step counter. As I understand it, a note-on on the Game Boy resets only the period divider, but a note-on ($4003/$4007 write) on the NES resets only the state of the 8-step counter. And you can't force a full reset of the period divider by setting the period to zero, because the length counter logic treats periods less than 8+1 as note-off. Am I right?
nothingtosay wrote:
Notice how the notes stay the same but the tone changes between the first "doo-doo, doo-doo, doo-doo" and the next in my recording?
I think I've managed to fix this - there were a couple of things wrong with the square channels' code.
I made a test build (download) with the fix, and I'm pretty sure I can hear different variations of the music loop now.
If you could test it out on your end and let me know if it seems to be fixed, that'd be great.
It's closer, but it's not there yet. It changes tone but only over a much longer time than the hardware does on average. Where the hardware sometimes alters the tone between musical phrases, Mesen sometimes keeps the same tone for the duration of an entire loop of the music for me (although in the recording I uploaded, the second half stays pretty consistent in tone, whereas the first half changes frequently. It went back to changing a lot on the third and fourth loops, which I edited out for length). FCEUX is closer, but actually not as good as I remembered when I wrote my initial post. It's still seemingly too slow about changing and the changes aren't nearly so tonally drastic. Bizhawk sounds to me like it covers a wider range of tone changes, but is still really slow about it.
Does anyone know of an emulator that does the effect like a real NES does? I tested puNES, Nestopia, FCEUX, Nintendulator, and Bizhawk. Maybe rainwarrior, tepples, or Quietust have some further ideas?
nothingtosay wrote:
Does anyone know of an emulator that does the effect like a real NES does?
Does my NSF player get it right?
Sorry for the plug, but you did ask, and I am aiming for OCD-level accuracy, so anything you have to say about it would interest me.
Sorry, but no. Yours has some phase cancellation but it doesn't change over time.
nothingtosay wrote:
Sorry, but no. Yours has some phase cancellation but it doesn't change over time.
The relative square phase should not change over time when playing the NSF. The music writes both high bytes during the loop, and will leave the phase in a consistent position each time through the loop. (Triangle, on the other hand, will not be consistent, but its phase makes a relatively minor difference in the sound.)
Perhaps the phases of the square waves don't change individually, but do their timings relative to each other get changed at all? That could cause phase cancellations. I really don't have any education on or understanding of the details of how the NES works; I'm an amateur audio enthusiast so I'm limited to what my knowledge and analysis based on that can tell me, so I may be wrong about any part of my theories. But whatever is causing it, the effect is there on my NES and present to varying degrees in some emulators. It's also present in some officially released recordings of the Mario 3 soundtrack, but not all, mysteriously. Maybe those albums that don't have it used emulation or there's some other explanation. The best example I can find on an album is from the 30th anniversary Super Mario Bros. album released last year.
I uploaded a sample because the second loop goes through such big tonal shifts in such a short time. Hopefully that helps demonstrate it's not me imagining things or something wrong with my NES. No emulator I've tested has produced the variety of sounds that my recording and that sample show is possible.
By the way, I forgot to mention that I also tested the Wii U and 3DS Virtual Console releases to see if they get it right. It'll probably be no shock to anyone here that they don't.
rainwarrior wrote:
The relative square phase should not change over time when playing the NSF. The music writes both high bytes during the loop, and will leave the phase in a consistent position each time through the loop. (Triangle, on the other hand, will not be consistent, but its phase makes a relatively minor difference in the sound.)
It'll be the same for every loop, since NSFs are deterministic (unlike actual game ROMs), but it should still vary within each loop. Is that what you meant, nothingtosay?
I actually wasn't really thinking about that before, but I would have thought you're correct, Rahsennor. However, I put it to the test and played the NSF in Nintendulator and Nestopia, and, to my surprise, in both emulators the sound will actually keep changing from loop to loop, never the same way twice, within reasonable listening lengths, anyway. Here's another interesting thing: I recorded two loops in Nintendulator, stopped it playing, then recorded another two loops (Nintendulator appears to have no WAV writing function, so I had to use an outside capturing program). I closed the emulator, opened it again, and recorded a further two loops. When I compared them, each instance had the same tonal shifts in the same places. Starting play, switching to another track without stopping, and then switching back still produces the same results as my other three recordings. But, with Nestopia, every play is different. I take it that means Nintendulator resets something before initiating every play, but Nestopia doesn't. Because Nestopia immediately starts playing when I load the NSF, I can't try playing the underground theme before anything else ever plays to see if it would still be different each time if playing it were the first thing the emulator did. Evidently what causes the effect is also something that's not wholly internal to the NSF as well, or, as Rahsennor says, it would presumably play the same way for every loop, no matter the player.
Anybody else can try this stuff too to see if they have the same results. Surely I'm not the only one who hears it, right?
In-game, the timing of the play routine (particularly whether a write lands before or after the period divider resets) may depend on the CPU load of the game logic. Sound effects also end up changing the channels' relative phase.
Looking more closely at it:
1. Super Mario Bros. 3 writes $4007 and $4003 on every note, not just every loop.
2. The timing between this pair of writes varies by about 100 cycles (though with the same variation every loop). This might account for roughly a 2.5% difference in phase on the highest frequency notes used in the track. (Probably not the culprit; not strong enough.)
3. tepples suggested here that $4003/$4007 does not reset the state of the clock divider. This could account for a much greater difference. If the pitch was the same, it could be up to 12.5% (1/8) off, but if the previous pitch was lower it could cause even wider variation (every octave down doubles the width here). This could potentially vary between loops.
4. Said before, but the triangle is free running and will have more or less random phase always, but I don't think it accounts for what you're hearing. (It's an effect, but not terribly strong compared to what the squares are doing.)
So, really, if you're interested, I'd suggest digging into #3. The question I'd have for that is whether the clock divider eventually halts when it runs out while the channel is halted, or whether it's always running, and thus every single note is subject to rather large potential phase variation. The Visual 2A03 project might illuminate this. (Also, once determined, the information should be mentioned on the wiki, if it isn't already.)
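Point 2's ~2.5% figure can be sanity-checked with the standard square-channel pitch formula from the NESdev wiki, f = 1789773 / (16 x (t + 1)), which means one full waveform lasts 16 x (t + 1) CPU cycles. A quick Python check (the 100-cycle jitter value is taken from the post above; the specific timer value 249 is my own worked example):

```python
CPU_HZ = 1789773  # NTSC 2A03 CPU clock

def square_freq(t):
    """Square-channel frequency for an 11-bit timer value t."""
    return CPU_HZ / (16 * (t + 1))

def phase_jitter_fraction(t, jitter_cycles=100):
    """Fraction of one full waveform covered by a CPU-cycle jitter."""
    return jitter_cycles / (16 * (t + 1))

# A 100-cycle write jitter is 2.5% of a period when t = 249, i.e. a
# note around 447 Hz; lower notes (larger t) are affected even less.
```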
This SMB3 track is an interesting edge case because it's using both channels identically, which is rather strange. Usually doublings have a small difference in pitch for an intentional chorus effect, or maybe a change of octave, but SMB3 is just doing the exact same thing on both.
You keep asking which emulators are accurate, which is not a fair question. Most likely nobody has tried to refine this very specific and subtle aspect of its behaviour against this very specific edge case. If you want to know the answer, make a test ROM that can expose the difference and test it on real hardware and emulators you'd like to know more about.
SMB3 itself is not a sufficient test. As mentioned, in-game there are a lot of variables, but even the NSF, which should be deterministic, can't have consistent timing between different hardware NSF players, or software NSF players. The NSF specification is not strong enough to specify exactly when the play routine should start like that. (You mentioned a test case by Blargg, but I'm not certain if it's supposed to be a test of this specific thing either.) You can't just record SMB3 from hardware and expect "accurate" emulators to sound identical; that in itself would probably be an unfair and incorrect test (similar to that strange pattern of memory initialization people used to use because it was measured that way on one particular NES). Even the idea that it should "change over time" is not necessarily correct for all reasonable timings-- things like this can often end up falling on some coincidental integer division of the timing loop, the change might be accidentally due to something else, etc.
Fixed a really goofy bug in my APU emulation, and now it sounds like this. Better?
rainwarrior wrote:
3. tepples suggested here that $4003/$4007 does not reset the state of the clock divider. This could account for a much greater difference. If the pitch was the same, it could be up to 12.5% (1/8) off, but if the previous pitch was lower it could cause even wider variation (every octave down doubles the width here). This could potentially vary between loops.
This is definitely part of the solution - Mesen was resetting the clock divider on $4003/$4007 writes, which is what caused SMB3 to always sound exactly the same. Beyond this, I'm not quite sure what the problem could be (it's really far outside my field of expertise!). I'm more than happy to implement a solution if someone figures out the cause, though.
I ran into issues with blargg's dmc-rates test and found that subtracting 1 from the rate lookup table made it pass, but I don't understand why.
I'm putting this here because I see you did the same but with an explanation (DeltaModulationChannel.cpp - line 146). How does the real thing work?
Does the timer always decrement, even immediately after wrapping around? (I'd assume this goes for every other timer too, not just the DMC.)
Code:
if(timer == 0)
{
    timer = period;
    // stuff
}
timer--;
Like this?
rainwarrior wrote:
You keep asking which emulators are accurate, which is not a fair question. Most likely nobody has tried to refine this very specific and subtle aspect of its behaviour against this very specific edge case. If you want to know the answer, make a test ROM that can expose the difference and test it on real hardware and emulators you'd like to know more about.
I didn't really ask which emulators are accurate overall, I asked if anyone knew of any that accurately reproduce the effect. I know that overall Nintendulator is less accurate than Mesen, at least according to the existing test ROMs, but Nintendulator handles this particular edge case better. And like I said, I don't know the first thing about programming so I have no capability of making a test ROM for this. If I were able, it's definitely the kind of project I'd devote some time to.
rainwarrior wrote:
(You mentioned a test case by Blargg, but I'm not certain if it's supposed to be a test of this specific thing either.)
I assume you were talking to me in this paragraph, but Sour was the one who brought up the APU test ROM. I replied that if Mesen passes it, it just must not test what's necessary for the Mario 3 underground music effect.
rainwarrior wrote:
You can't just record SMB3 from hardware and expect "accurate" emulators to sound identical; that in itself would probably be an unfair and incorrect test (similar to that strange pattern of memory initialization people used to use because it was measured that way on one particular NES). Even the idea that it should "change over time" is not necessarily correct for all reasonable timings-- things like this can often end up falling on some coincidental integer division of the timing loop, the change might be accidentally due to something else, etc.
I know there's no single correct sound. My own recording's first loop has several shifts in tone and then the second loop has little, so I know it doesn't necessarily change much all the time. But when an emulator's sound doesn't change much or at all over several loops, it probably means the emulator is missing something.
Since I'm not a programmer and I'm entirely at the mercy of people who are to figure out and implement things I want in emulators, thank you for checking further into how Mario 3 works.
Rahsennor wrote:
Fixed a really goofy bug in my APU emulation, and now it sounds like this. Better?
Sounds pretty correct to me, I'm happy to report!
Please share what you did and any other information relevant to achieving this.
fred wrote:
I ran into issues with blargg's dmc-rates test and saw that subtracting 1 from the rate lookup table worked, but I don't understand why.
I'm putting this here because I see you did the same but with an explanation (DeltaModulationChannel.cpp - line 146). How does the real thing work?
As far as I know, the answer is in the comment above it (quoted from the wiki) - "The rate determines how many CPU cycles happen between changes in the output level during automatic delta-encoded sample playback."
So a value of 100 means the output changes every 100 cpu cycles. For the other channels, a value of 100 means it would change every 101 cycles. Since the clock divider logic is shared between all channels, I subtracted 1 from the DMC init value to match.
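The off-by-one can be illustrated with a toy down-counter simulation (my own sketch, not Mesen's actual code): whether the counter decrements on the same cycle it reloads determines whether a period value P yields P or P+1 cycles between clocks, which is why subtracting 1 from the DMC table values compensates when the divider logic is shared.

```python
def clock_gaps(period, decrement_on_reload, steps=1000):
    """Toy model of a reloading down-counter. Each CPU cycle: if the
    counter is zero, reload it (a 'clock' event); if
    decrement_on_reload is True it still decrements that same cycle,
    otherwise it skips the decrement. Returns the gaps (in cycles)
    between successive clock events."""
    timer, last, gaps = 0, None, []
    for cycle in range(steps):
        if timer == 0:
            timer = period
            if last is not None:
                gaps.append(cycle - last)
            last = cycle
            if not decrement_on_reload:
                continue
        timer -= 1
    return gaps
```

With decrement_on_reload=True (fred's snippet), a period of 100 clocks every 100 cycles; with it False, every 101 cycles, matching the behavior described for the other channels.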
nothingtosay wrote:
Rahsennor wrote:
Fixed a really goofy bug in my APU emulation, and now it sounds like this. Better?
Sounds pretty correct to me, I'm happy to report!
Please share what you did and any other information relevant to achieving this.
I'd be interested to know as well - unfortunately it looks like your code isn't open source (yet?)
Ohhhh. I see now! Thanks for the explanation!
nothingtosay wrote:
I know that overall Nintendulator is less accurate than Mesen, at least according to the existing test ROMs
Test ROMs only test for very specific things, though. If there's no test ROM for something yet, it's probably a relatively unknown behaviour still, and any emulator, even one that seems "more accurate" than others, is liable to have arbitrary/random decisions applied to that grey area.
For example, NSFPlay seems to implement a free-running clock divider, but it implemented the 8-step sequence as a 16-step sequence instead of an additional division of the clock by 2, so it seems to be semi-correct, reducing the potential phase shift by half? When this was written there was no deliberate attempt here to get that particular behaviour correct, because it was unknown to the author. These were just decisions made to fill the "gaps" between the things that are known.
There's always more than one way to implement anything, and in terms of emulation "unknown" often means "doesn't appear to matter". You have to find a test case, ideally a test ROM, that can show you what the true behaviour should be.
Like, the 16-step sequence in NSFPlay is apparently wrong, but the original implementer did it that way, and it's taken many years to finally notice a case where it matters. Because I don't tend to change code that was already passing all of my own tests, I saw no reason to change it. Now I have a reason.
nothingtosay wrote:
Nintendulator handles this particular edge case better.
Not necessarily. My point was that the variation of SMB3 over time could easily be caused accidentally by something else, and it's possible that there are stable timings where it shouldn't change over time. This particular effect is very sensitive to a lot of things that SMB3 isn't any kind of control for.
Anyhow, just be wary of seeing one particular example do what you expect and presuming it is correct. SMB3 was a good example for noticing that this subtle effect exists, but it's not the proper tool for figuring out exactly how to implement it correctly.
nothingtosay wrote:
rainwarrior wrote:
(You mentioned a test case by Blargg, but I'm not certain if it's supposed to be a test of this specific thing either.)
I assume you were talking to me in this paragraph, but Sour was the one who brought up the APU test ROM.
Sorry for the confusion, I'm probably just conflating the whole conversation into one big "You".
nothingtosay wrote:
I don't know the first thing about programming so I have no capability of making a test ROM for this. If I were able, it's definitely the kind of project I'd devote some time to.
I'll write my own test for it when I get back to NSFPlay, if someone else doesn't do it before then.
nothingtosay wrote:
Sounds pretty correct to me, I'm happy to report!
Please share what you did and any other information relevant to achieving this.
Sour wrote:
I'd be interested to know as well - unfortunately it looks like your code isn't open source (yet?)
Long story short, I was resetting the clock divider on writes to $4003 and $4007.
Short story long, that was a hack I added to pass the apu_mixer tests. Once I removed it, I eventually traced the real fault to a typo in the 8-step sequencer, postincrement instead of preincrement, meaning it was perpetually 45 degrees out of phase. And then I had to fix a trail of code that depended on the old functionality.
This is why my code isn't open source yet. I'd just be spreading the stupid.
I can post my APU code anyway if you want it, but it's not very readable.
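The post-increment bug described above can be sketched in isolation (a toy illustration with a made-up linear 0-7 table, not the actual APU code): reading the table and then advancing (post-increment) versus advancing first (pre-increment) leaves the output one step, i.e. 45 degrees of the 8-step cycle, out of phase.

```cpp
#include <array>
#include <cassert>

// Run 8 clocks of a toy 8-step sequencer in either increment order and
// record the output at each clock.
std::array<int, 8> runSequencer(bool preIncrement) {
    static const int table[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    int step = 0;
    std::array<int, 8> out{};
    for (int clock = 0; clock < 8; ++clock) {
        if (preIncrement) step = (step + 1) & 7;  // advance before reading
        out[clock] = table[step];
        if (!preIncrement) step = (step + 1) & 7; // advance after reading
    }
    return out;
}
```

The two orderings produce the same sequence, just shifted by one step, which is exactly the kind of perpetual phase offset described.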
nothingtosay wrote:
I don't know the first thing about programming so I have no capability of making a test ROM for this. If I were able, it's definitely the kind of project I'd devote some time to.
I have a few test ROMs in mind, for this and other stuff I haven't seen (good) tests for, but life has other plans for me at the moment.
Thank you, Sour.
Bisqwit's NTSC-decoder is implemented.
Rahsennor wrote:
Long story short, I was resetting the clock divider on writes to $4003 and $4007.
Yea, that's pretty much the issue I had and what I fixed as well.
Eugene.S wrote:
Thank you, Sour.
Bisqwit's NTSC-decoder is implemented.
This is now available in 0.7.0 along with AVI recording. Also, 0.6.1 was released last week with a few fixes and Linux support.
The AVI dumper produces heavy A/V desync for me with both codecs (haven't tried uncompressed). The video looks inconsistently sped up.
Also, Sour, why didn't you make use of constexpr the same way it's used in Bisqwit's original method? That makes the float calculations happen at compile time. Does moving them to happen only once per frame result in the same speed as if they were done via constexpr? And do you know if VS14 inlines the lambdas? Bisqwit said it matters.
Since you said both codecs, I'm guessing you compiled it yourself?
There is no desync on my end (just tried a ~45 secs video) - what are you using to play the AVI files? It is possible that the AVI header is not being generated with the right values and causing inconsistencies between players.
My implementation adds a bunch of filter settings (brightness, saturation, hue, etc.), so none of these values are actually known at compile time (so it's impossible to use constexpr). There was a small performance impact, but not that significant. They are only updated once per frame - which is not that much compared to rendering ~500k pixels at 8x res.
I'm pretty sure the lambdas are inlined - I cannot put breakpoints in them in release builds, which is usually caused by functions getting inlined by the compiler.
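The constexpr point above can be shown with a toy example (not Bisqwit's or Mesen's actual code): a constexpr function is only folded at compile time when all of its arguments are compile-time constants, so user-adjustable brightness/saturation/hue rule it out.

```cpp
#include <cassert>

// Stand-in for a per-palette-entry float computation.
constexpr double applyGain(double v, double gain) {
    return v * gain;
}

// Both inputs are literals, so the compiler evaluates this itself.
static_assert(applyGain(2.0, 2.0) == 4.0, "folded at compile time");

// A value coming from a settings UI is only known at runtime, so the very
// same function executes at runtime instead. `userGain` is hypothetical.
double userGain = 1.5;
```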
I recorded the video with the ntsc filter, and on my machine filter alone works at 20 fps, and with avi dumping it becomes even less. Try to make it lag and then record.
I tried x64 MPC on Win7, x86 MPC and VirtualDub on WinXP, they all give clear desync and/or video speed up for me.
Sour wrote:
My implementation adds a bunch of filter settings (brightness, saturation, hue, etc.), so none of these values are actually known at compile time (so it's impossible to use constexpr).
You could also update them only when the values are changed by the user, I did that in fceux.
The ntsc filter at 4x while recording dropped to around 20fps on my end, but still recorded fine.
But the ntsc filter at 8x dropped to 2fps and that seemed to cause some issues (only recorded like 2 seconds worth though) - I'll test this some more later, thanks for letting me know!
feos wrote:
You could also update them only when the values are changed by the user, I did that in fceux.
Yea, at the moment it recalculates them every frame - it's not that much of an overhead though. It's probably equivalent to < 0.1% of the time it takes to render the frame itself (at 8x at least). It's only this way because I forgot to implement that check, though - the code for the other NTSC filter does it, iirc.
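The "update only when the values change" idea can be sketched like this (field names are illustrative, not Mesen's): derived constants are recomputed only when a user setting actually differs from the last values seen.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical NTSC filter settings with change detection.
struct NtscFilterSettings {
    double hue = 0.0, saturation = 1.0;          // user-facing settings
    double lastHue = NAN, lastSaturation = NAN;  // NAN forces first update
    double sinHue = 0.0, cosHue = 1.0;           // derived constants
    int recomputeCount = 0;

    void updateIfChanged() {
        if (hue == lastHue && saturation == lastSaturation) {
            return;  // nothing changed; skip the float math
        }
        sinHue = std::sin(hue) * saturation;
        cosHue = std::cos(hue) * saturation;
        lastHue = hue;
        lastSaturation = saturation;
        ++recomputeCount;
    }
};
```

Calling `updateIfChanged()` once per frame then costs a couple of comparisons on the frames where nothing moved.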
What if Mesen outputs YUV signal directly instead of RGB? That will skip the decoding step and probably speed things up a bit. And in case you're using an index-based palette approach for the picture, you could generate YUV internally that way as well, with no need to encode initial RGB signal either.
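For reference, this is the conversion that outputting YUV directly would skip: a minimal BT.601 RGB -> YUV transform with components in [0, 1] (the exact matrix and range a given player expects may differ).

```cpp
#include <cassert>
#include <cmath>

struct Yuv { double y, u, v; };

// BT.601 full-range RGB -> YUV, floating point for clarity.
Yuv rgbToYuv(double r, double g, double b) {
    double y = 0.299 * r + 0.587 * g + 0.114 * b;    // luma
    return { y, 0.492 * (b - y), 0.877 * (r - y) };  // chroma U, V
}
```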
feos wrote:
What if Mesen outputs YUV signal directly instead of RGB?
That's actually a pretty good idea, but it looks like YUV texture support in DirectX11 is not available in Windows 7? Trying to use DXGI_FORMAT_AYUV fails on my computer, and most YUV-related formats have this note in the documentation: "Direct3D 11.1: This value is not supported until Windows 8."
An alternative seems to be to use a shader to perform the conversion - but I am mostly clueless about shaders (which is why they're still not supported in Mesen)
Is dx11 a hard requirement? Also I dunno how it's set up in Mesen, but won't it work like, say, in video players? They all can handle YUV on any machine.
feos wrote:
Is dx11 a hard requirement? Also I dunno how it's set up in Mesen, but won't it work like, say, in video players? They all can handle YUV on any machine.
Replacing DX11 with something else would be a fair amount of work (especially since it would make no sense to just remove DX from the graphics code and keep using it for sound/gamepads). I'd rather not use SDL for the Windows version - I don't like having to ship an extra DLL with Mesen when I can easily use DX directly.
I did a tiny bit of research and noticed VLC has a "Use hardware YUV->RGB conversions" option - I tried to lookup some information about it, and ended up finding a bunch of threads by users having problems when the option is enabled.
LAV Video Decoder (which is what I apparently use for playback in MPC) on the other hand appears to be doing YUV->RGB translation in software.
The filter runs at 60fps at 8x res on my 1st gen i5 (overclocked to ~3.4GHz), so I'm not sure why it runs at 20fps for you.
Are you using an even older CPU? Or maybe you're on a dual core CPU? A dual core would probably have a hard time since there are 4 threads working at the same time when using the NTSC filter (nes emulation, ntsc filter top half, ntsc filter bottom half and rendering with DX)
Running faster than full speed on my machine is an indication of a really fast emulator core (or filter). My computer is from 2008 (it was the cheapest one in the store).
Sour wrote:
I did a tiny bit of research and noticed VLC has a "Use hardware YUV->RGB conversions" option - I tried to lookup some information about it, and ended up finding a bunch of threads by users having problems when the option is enabled.
Under *n*x, there's the "Xvideo" X11 extension, which provides access to hardware-accelerated colorspace transforms. I have no idea if there's a Windows variant of that, though.
Also, I didn't mean dropping DX11, but probably using lower versions?
feos wrote:
Also, I didn't mean dropping DX11, but probably using lower versions?
Older versions are starting to be pretty ancient by now, though. DX9's latest release dates back to 2010 (and it was first released in 2003...). I'm not much of a DX expert, but from my understanding, the API between DX9 and DX10 was completely changed and converting to DX9 would pretty much imply rewriting all the code. Not to mention worldwide Windows XP usage is down to 5%, so there is not much incentive for me to switch to DX9 at this point.
Also, just to know, is it the 8x resolution version of the filter that's running at 20fps on your computer?
If your CPU is a dual core though, it's very likely the filter is performing even worse than it would normally due to how the core is built. If the filter doesn't render the picture within 16ms, the emulation thread ends up doing a spinlock while waiting for the next thread (to keep latency as low as possible). With a dual core CPU, though, this means that the emulation thread is wasting a whole core while the other 2 threads are trying to finish rendering the picture - this is far from being an ideal scenario.
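A toy version of the spin-wait described above (names are illustrative, not Mesen's actual code): the emulation thread busy-loops on an atomic flag until the render thread signals completion, keeping latency low at the cost of fully occupying one core while it waits.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> frameReady{false};

void renderThread() {
    // ... imagine the NTSC filter + rendering happening here ...
    frameReady.store(true, std::memory_order_release);
}

long waitForFrame() {
    long spins = 0;
    while (!frameReady.load(std::memory_order_acquire)) {
        ++spins;  // burning CPU: this is the wasted core on a dual core
    }
    return spins;
}
```

On a quad core the spinning core is spare capacity; on a dual core it starves the two filter threads, which matches the slowdown described.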
It's the speed at the 2x resolution, and yes, it's a dual core.
I'm trying to run Mesen on OSX using mono. Mesen version is 0.7.0, from the official website. Mono version is 4.4.2, installed SDL2. When I run "mono mesen.exe", it creates a file "libMesenCore.dll", and I get an error "Mesen could not launch because it was unable to load MesenCore.dll due to missing dependencies". Any hints on what to try?
If you're using a precompiled version, it's likely the standard C++ library on your system doesn't match what Mesen is compiled against. If you can, I'd suggest trying to compile Mesen yourself and see if that works. Also, keep in mind that I haven't done any OSX testing, so it is fairly likely that there may be issues (and I won't really be able to help fix them as I do not own a Mac)
Running 0.7.0 on Linux using Mono, and it crashed twice. One time, it crashed while paused in Super Mario Bros. while opening the video settings with a crash log, and the other time, I scaled the video down to zero, and then back up. It is also extremely jittery despite the emulator reporting a constant 60-61 FPS with and without VSync enabled.
Crash log:
Native stacktrace:
mono() [0x4b18ff]
mono() [0x42982c]
/usr/lib/libpthread.so.0(+0x11080) [0x7fe4f4381080]
/usr/lib/libc.so.6(+0x128952) [0x7fe4f3ee3952]
/home/syboxez/.local/bin/libMesenCore.dll(_ZN11SdlRenderer6RenderEv+0x121) [0x7fe4e737ad81]
/home/syboxez/.local/bin/libMesenCore.dll(_ZN13VideoRenderer12RenderThreadEv+0x26) [0x7fe4e73712e6]
/usr/lib/libstdc++.so.6(+0xbb970) [0x7fe4e6baa970]
/usr/lib/libpthread.so.0(+0x7454) [0x7fe4f4377454]
/usr/lib/libc.so.6(clone+0x5f) [0x7fe4f3ea37df]
Debug info from gdb:
terminate called without an active exception
=================================================================
Got a SIGSEGV while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
=================================================================
EDIT: Jitteriness was caused by GNOME's desktop compositing. Tried in MATE, and it worked somewhat, but there was massive amounts of screen tearing, even with VSync turned on.
I just released version 0.8.0.
It includes a lot of debugger improvements, a few emulation fixes, and a handful of options to enable/disable model-specific behaviors (e.g. the noise channel loop flag, $2004 read behavior, the OAMADDR bug, whether the PPU resets when the console is reset, NES-001 vs NES-101 open bus behavior for input ports, etc.)
Interesting that you made some model-specific behavior settings. I've been thinking about this with regards to audio, since that's my biggest area of interest, but, like I've said before, I have no capability of programming it myself. You may have seen this recent thread where Great Hierophant made some comparison recordings of NESes and I posted some of two revisions of the Famicom. You incorporated NTSC video simulation into your emulator, but I find it jarring to have it on and then hear such bright audio that hasn't been smoothed by analog output. Would you be interested in adding EQ presets for different audio output methods?
With a suitable piece of audio, whether from a game or a test ROM (I think tepples might have some advice here on what to use), I could make recordings on hardware and through Mesen and give you comparative EQ values to create settings to make Mesen's audio sound like an early model and a later model Famicom over RF, a Twin Famicom over RF (which is essentially the same) and standard coaxial A/V cable, and an NES over both as well. I plan on buying an A/V Famicom sometime not too long from now, and I guess that would just leave the toploader NES to acquire. It would also be possible to make mixing presets for expansion audio to mimic hardware (roughly, since, as rainwarrior points out, different individual Famicoms have variances and heat differences can affect it as well). If you're interested, I'd be happy to contribute.
Also relevant to audio filtering:
this thread.
I have a hunch that we won't see the 75µs audio preemphasis filter in any Famicom or NES RF modulator.
I'm more than happy to try - it would be pretty interesting to have sound presets for the different models.
However, just keep in mind that my sound processing skills are pretty terrible.
Everything sound-related in Mesen at the moment is pretty basic stuff (and the more "complex" stuff usually involved me googling until I found a suitable solution).
I'm sure I can eventually pull something off - I just don't know how much time it might take.
Maybe somebody here knows of some open source multiband equalizer program you could use or adapt. 30 bands should give pretty good precision, but I can give volume change values for even more than that. I know there are linear phase and minimum phase EQs out there, but I don't really have any advice on which you should seek for this purpose. Linear phase is usually used for just a few bands that cover a broad range of frequencies, so maybe minimum phase is more appropriate. I'm no authority though, I haven't gotten educated on the details of equalizers.
In the thread lidnariq linked to, tepples said, "Perhaps I need to write something that outputs pink (1/f) noise through $4011 for calibrating EQ." I know he and lidnariq are very knowledgeable, so if tepples thinks that's the best method and lidnariq agrees, if somebody can hook me up with a ROM that does that, I can load it onto my Everdrive and record it. I'll have to get a pin converter to use the Everdrive with my NES, though. So, addressing your statement that you don't know how much time it will take, I'm in no big rush. I want to see this happen, and if having recordings sooner rather than later might motivate to do it, I can do that, but I'm patient too. Take your time to figure out how this can best be done. I also need to get the blown capacitor or voltage regulator in my earlier model Famicom fixed before I can use it, so...
Well, in what must be record time (just four hours after my last post!) tepples made
a test ROM, so I felt I had to do something with it as proper thanks. I made some recordings and EQ profiles, with a Famicom and a Twin Famicom, and I figured I'd share an example result.
http://www.mediafire.com/file/h0e2rh90y ... arison.zip
In the zip is the title screen music of the US Legend of Zelda recorded from Mesen's sound recorder (Mesen original.wav), the Twin Famicom through its RCA audio out with a rather thick RG6 cable (Twin Famicom.wav, obviously), and then Mesen EQ.wav, which was created by matching the frequency spectrum of the pink noise recorded through the Twin Famicom to the spectrum of the pink noise recorded from Mesen and then applying the result to Mesen original.wav. All three files are sample-aligned and volume-matched with ReplayGain. I think you'll find that the Twin Famicom recording and Mesen with EQ applied sound very, very similar. Spectrum analysis also shows them to be extremely similar, as you'd hope, up to about 20 kHz, which is a limitation of my EQ plugin.
I found a tiny library that seems to work.
https://github.com/thedrgreenthumb/orfanidis_eq
I've managed to make games sound like they have way too much treble, so I assume it's working!
It supports arbitrary numbers of bands (with 5, 10, 20, 30 presets) and lets you set the gain for each band. The built-in bands seem to go from 20hz to 20khz, but that's probably something I can change if need be (though that's pretty much the limits of the human ear so...).
Seems to support Chebyshev type 1 & 2 filters, and Butterworth (whatever those are!).
It seems to cause a ~15% performance loss in stereo mode, so half that for mono, and it's probably possible to make it a bit faster than my quick test, too. Either way, it's not that bad (especially for an optional feature).
At this point I guess I can setup the gains based on measurements to build some preset filters, and add an equalizer in the audio options, too.
I have no idea how to determine gain values for each band though (or for that matter, how to pick how many bands is appropriate)
You probably don't actually need a graphic EQ instead of just a few lowpass and highpass filters at the correct corner frequencies.
Although we have data about the characteristics of the NES-001 RCA audio output, we've never sat down and done the same for RF output.
Quote:
Seems to support Chebyshev type 1 & 2 filters, and Butterworth (whatever those are!).
Pass band: Frequencies that you do want
Stop band: Frequencies that you DON'T want.
Ripple: Non-monotonic change in amplitude as a function of frequency
—
Butterworth: No ripple in either pass band or stop band; the steepest possible filter per order without any ripple
Chebyshev 1: Ripple in pass band, no ripple in stop band
Chebyshev 2: Ripple in stop band, no ripple in pass band
Elliptic: Ripple in both, steepest possible filter per order.
Filters with no ripple in the stop band rapidly block everything to well below the noise floor. Filters with ripple in the stop band have to choose just how much erroneous signal is permissible. Sometimes a sharper filter is worth it.
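The "no ripple" property above is visible in the closed-form magnitude response of an n-th order Butterworth lowpass, |H(f)| = 1 / sqrt(1 + (f/fc)^(2n)): it decreases monotonically and sits at -3 dB at the corner frequency.

```cpp
#include <cassert>
#include <cmath>

// Magnitude response of an ideal n-th order Butterworth lowpass with
// corner frequency fc. Monotonic in f, hence "no ripple".
double butterworthMagnitude(double f, double fc, int order) {
    return 1.0 / std::sqrt(1.0 + std::pow(f / fc, 2 * order));
}
```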
I attached images of the EQ curve produced when matching spectra using 30 points and 60 points. This is from Voxengo CurveEQ, by the way. 60 is what I used in my example audio file, but the curves look pretty similar and sound even more similar than they look. It's something that could probably be audibly approximated well with high-pass and low-pass filtering, the question is how much precision in replicating hardware you want. Although I know that's a pretty fuzzy thing itself, as this is just my individual hardware specimens recorded using my setup and there's gonna be a range of results with different ones. But you've already found an EQ library and EQs are more customizable, so let's work with that.
If there will be a configurable EQ for the user to play with, 60 bands would be rather unnecessary and pretty tedious to tune, unless there were a mode to move multiple sliders in tandem when moving one, as some EQs offer. 30 bands can be a bit much for a regular end user too, but I wouldn't want to go lower than 20 for presets as the curve loses a lot of detail with this example below that. Don't know how many bands would be necessary for hardware I haven't recorded yet, but I wouldn't expect drastic differences. RF appears to be an even simpler curve to replicate, so it probably doesn't get more complex than the one I've given images of. I'll order a pin converter so I can get NES recordings using my Famicom Everdrive and I'll cruise eBay for deals on an A/V Famicom or toploader NES.
20 Hz to 20 kHz is the range CurveEQ covers as well, so that's cool. I can give gain values for any point along that spectrum, but we can just go with the frequency bands CurveEQ uses when set to the number you decide to use. I'd recommend choosing 20, 25, or 30 bands. Once you settle on a number, I'll give you the list of frequencies the bands are set at and the gain adjustments for the different hardware and output methods. The best practice would probably be to do all negative dB values to ensure there can't be any clipping, but I wonder if people might dislike the volume decrease and choose not to use it because of our ears' natural "louder is better" bias. Oh well.
Labels for the two Y axes? I tentatively think the right one should be decibels, but...
Also, I'm surprised by the ripple you're showing in the passband. Could you describe your entire analysis setup?
lidnariq wrote:
Butterworth: No ripple in either pass band or stop band; the steepest possible filter per order without any ripple
Chebyshev 1: Ripple in pass band, no ripple in stop band
Chebyshev 2: Ripple in stop band, no ripple in pass band
Thanks for the explanation - so in this case Butterworth would probably make the most sense?
nothingtosay wrote:
If there will be a configurable EQ for the user to play with, 60 bands would be rather unnecessary and pretty tedious to tune, unless there were a mode to move multiple sliders in tandem when moving one, as some EQs offer. 30 bands can be a bit much for a regular end user too, but I wouldn't want to go lower than 20 for presets as the curve loses a lot of detail with this example below that.
Realistically, 20 is pretty much the limit of what I could fit into a config screen and keep it fairly usable - if 20 bands gives enough precision, it might be best in this case.
Otherwise, there's always the possibility of having preset filters with 30+ bands and then offering a more limited EQ in the UI (e.g. 10 bands) - the only downside being I couldn't visually show what the presets are on the EQ sliders.
nothingtosay wrote:
The best practice would probably be to do all negative dB values to ensure there can't be any clipping
I agree - there may already be some conditions where the output comes close to clipping, so negative dB values would make the most sense.
Sour wrote:
Thanks for the explanation - so in this case Butterworth would probably make the most sense?
Really depends. Given that nothingtosay's EQ shows ripple in the passband, it might actually make sense to use some Chebyshev 1 filters after all. I don't know.
It looks like there's a 3rd order lowpass starting at 2kHz and a 1st order highpass starting at 80Hz, but I'd really like more details on the measurement setup first.
I don't actually distrust the corner frequencies, but I am shocked at just how much of the higher frequencies are being thrown away...
lidnariq wrote:
Labels for the two Y axes? I tentatively think the right one should be decibels, but...
Also, I'm surprised by the ripple you're showing in the passband. Could you describe your entire analysis setup?
I believe the numbers on the right don't actually apply at all to what I'm measuring there. They're for something else that can be displayed in the same window. It's like RMS decibel levels for every frequency band in a recorded spectrum or something, and there's normally a transparent outline of it but I turned it off for clarity. I should have cropped those numbers out. The pertinent numbers are on the left side and they're also decibels. Looking at the right is why it seemed like the low-pass is third-order, so I apologize. It's definitely not that severe!
The recording that produced those curves was made from a Turbo Twin Famicom's RCA output over a 3-foot double-shielded RG6 coaxial cable, which I know is way overkill, especially for that distance, but I think then you can be pretty sure it's affecting the signal only minimally. It goes into the left channel analog input of my M-Audio 2496 sound card and is recorded in Sound Forge 11 at 48 kHz. The card's specifications say "Frequency Response: 22Hz-22kHz, -0.4, +/-0.4dB" but I suppose I could actually put it to the test. I don't doubt that there could be slightly more accurate recording devices, but for what it's worth this sound card is pretty highly regarded and recommended (although I know audiophiles aren't exactly the most trustworthy group when it comes to the scientific aspects of sound) and I don't think it's responsible for a large treble loss. That little uptick at the top end of the frequency range is actually caused by Mesen having a significant drop in volume up there.
I recorded more than 20 seconds of pink noise from the test ROM, and did the same in Mesen. Then I captured an average of the spectra of my recordings, which were more than long enough for the average to stabilize. Then I used CurveEQ's match spectrum feature which allows you to apply a reference frequency profile to another and those curves show how the subject (Mesen) has to be changed to take on the characteristics of the reference (Twin Famicom). It seems to me that it did a pretty good job at that on my processed example file.
Sour wrote:
Realistically, 20 is pretty much the limit of what I could fit into a config screen and keep it fairly usable - if 20 bands gives enough precision, it might be best in this case.
Otherwise, there's always the possibility of having preset filters with 30+ bands and then offering a more limited EQ in the UI (e.g. 10 bands) - the only downside being I couldn't visually show what the presets are on the EQ sliders.
Yeah, 20 should be fine. I think that combined with the presets would be a very good place to be in. People can use a preset as a starting point and then quickly customize to their preference a handful of bands they dislike if they want to. I decided I don't quite like CurveEQ's default bands at 20 points since it has more at the bottom and fewer at the top than I'd like, so I went with Sound Forge's own 20-band EQ's values (except I cut two in order to add 17.5 kHz and 20 kHz to capture that uptick) and got the values for each frequency. Let's try this:
20 Hz: -4.4
40 Hz: -2.2
56 Hz: -1.0
80 Hz: -0.7
113 Hz: -0.3
160 Hz: 0.0
225 Hz: -0.2
320 Hz: -0.3
450 Hz: 0.0
640 Hz: -0.3
1.0 kHz: -0.4
1.8 kHz: -0.9
2.5 kHz: -1.7
3.6 kHz: -2.8
5.1 kHz: -4.4
7.2 kHz: -6.5
10 kHz: -8.3
15 kHz: -10.3
17.5 kHz: -10.9
20 kHz: -9.4
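The table above is in decibels; an equalizer applies each band as a linear amplitude multiplier via gain = 10^(dB / 20).

```cpp
#include <cassert>
#include <cmath>

// Convert a decibel adjustment to a linear amplitude multiplier.
double dbToGain(double db) {
    return std::pow(10.0, db / 20.0);
}
```

For example, the -4.4 dB at 20 Hz corresponds to multiplying that band's amplitude by about 0.60.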
I assume you don't want to do a release with only a Twin Famicom A/V preset and no NES or first-party Nintendo Famicom ones, but if you could give me a build with the EQ, I'd like to play with it and see what I think of using these bands.
nothingtosay wrote:
That little uptick at the top end of the frequency range is actually caused by Mesen having a significant drop in volume up there.
Ok, so this is actually a graph of the difference in magnitude at each frequency between your recording and the raw audio out of Mesen. Any chance you could share a graph against an ideal pink noise curve also?
I don't know if Mesen is currently doing any filtering...
Just to be overly specific, this is NOT the modulated RF path? just baseband line-level audio?
lidnariq wrote:
Ok, so this is actually a graph of the differences in magnitude at each between your recording and the raw audio out of Mesen.
Right.
lidnariq wrote:
Any chance you could share a graph against an ideal pink noise curve also?
Assuming Sound Forge's pink noise generator is ideal, I've attached the difference curves for Mesen and the Twin Famicom for good measure. But the pink noise from the ROM would also have to be the same for this to be a perfectly fair comparison.
lidnariq wrote:
Just to be overly specific, this is NOT the modulated RF path? just baseband line-level audio?
Correct, this is not RF. I have recorded that, but haven't posted a graph. It's from the standard RCA audio jack of the Twin Famicom.
nothingtosay wrote:
I assume you don't want to do a release with only a Twin Famicom A/V preset and no NES or first-party Nintendo Famicom ones, but if you could give me a build with the EQ, I'd like to play with it and see what I think of using these bands.
I'm in the middle of optimizing a bunch of stuff to try and squeeze a bit more performance out of Mesen at the moment, but I'll try to get you a build with a basic EQ UI over the weekend.
lidnariq wrote:
I don't know if Mesen is currently doing any filtering...
It might - I use blargg's blip_buf C library to resample the APU's output to the target sample rate and feed blip_buf's output to DirectSound/SDL. Taking a quick look at blip_buf's code, it seems to have (at the very least) a high-pass filter built into it.
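For illustration, this is the kind of high-pass a resampler can bake in (the textbook one-pole DC blocker, not blip_buf's actual code): y[n] = x[n] - x[n-1] + R * y[n-1].

```cpp
#include <cassert>
#include <cmath>

// One-pole DC blocker: removes DC and very low frequencies, passing the
// rest of the spectrum nearly untouched. R close to 1 gives a low corner.
struct DcBlocker {
    double prevIn = 0.0, prevOut = 0.0;
    double R;
    explicit DcBlocker(double r = 0.995) : R(r) {}
    double process(double x) {
        double y = x - prevIn + R * prevOut;
        prevIn = x;
        prevOut = y;
        return y;
    }
};
```

Feeding it a constant input shows the high-pass behaviour: the output decays toward zero instead of holding the DC level.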
nothingtosay wrote:
Assuming Sound Forge's pink noise generator is ideal
Also assuming the test ROM is ideal. For one thing, pink noise is tricky to generate in an environment where you can't, say, run a sixth-order IIR filter. For another, I am potentially imperfect.
In particular, I haven't thoroughly tested the ROM by running it in a 6502 simulator, logging $4011 writes, and doing big FFTs to ensure a 1/f power spectrum density. Instead, I recorded FCEUX's output in Audacity, and when it looked close to -10 dB/decade, I deemed it close enough.
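One cheap way to approximate 1/f noise without any IIR filtering is the Voss algorithm (a hedged sketch; tepples' ROM may well do something else): sum several white-noise rows, updating row k only every 2^k samples. The xorshift PRNG and the 8-row count here are arbitrary choices for illustration.

```cpp
#include <cassert>
#include <cstdint>

struct PinkNoise {
    static const int ROWS = 8;
    uint32_t rng = 0x1234567u;
    int rows[ROWS] = {};
    uint32_t counter = 0;

    int white() {  // xorshift32, scaled to [-128, 127]
        rng ^= rng << 13;
        rng ^= rng >> 17;
        rng ^= rng << 5;
        return (int)(rng & 0xFF) - 128;
    }
    int next() {
        uint32_t c = counter++;
        int sum = 0;
        for (int k = 0; k < ROWS; ++k) {
            // Row k only changes when the counter is a multiple of 2^k,
            // so higher rows contribute progressively lower frequencies.
            if ((c & ((1u << k) - 1)) == 0) rows[k] = white();
            sum += rows[k];
        }
        return sum;  // in [-1024, 1016]
    }
};
```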
Here's a build with the (unfinished) equalizer UI:
link (windows-only build)
It allows you to set all 20 bands from +20dB to -20dB (200 = +20, -200 = -20) and select the filter type ("None" will disable the equalizer completely).
Seems to be working fairly well - at the very least, the output seems to be a lot closer to your recording with the dB values you gave me.
Cool, thanks for doing that! I recorded the preset with each type of filter. The attached curves show deviation from the hardware recording of the Zelda title screen music. Butterworth allows the closest representation with all frequencies being less than plus or minus 2 decibels off from the reference. Chebyshev 1 is about plus or minus 5 decibels, so that's pretty significant, and then I didn't even bother measuring Chebyshev 2 because if you switch to it it's immediately apparent that it's quite wrong.
I know it'll never be exact with 20 bands, but I figured I'd try to compensate for some of the deviations when using the Butterworth filter. The third attached picture is the same as the second one but zoomed in. The fourth is the result of my efforts at adjusting, and you can see it swings up and down over a mostly somewhat smaller range. The largest valley is at 3 kHz despite us having bands at 2.5 kHz and 3.6 kHz, so you'd think there wouldn't be such nonlinearity in the middle between them, but there it is. That spike up at 12.5 kHz (the tallest one on the right side) makes me think it might be better to delete the 20 Hz band, since it's relatively smooth down there and barely audible even with a subwoofer, and add one at 12.5 kHz since that's much more audible and useful for adjusting treble. Would you mind making that adjustment and letting me test it?
But in the end, this still sounds close as it is, despite the appearance of the curve. And it's not as though every other Twin Famicom is gonna sound exactly like mine so it doesn't need to be a perfect match. RF will have even more tolerance for imprecision since it's not clear what an ideal demodulation method is, if there is one at all. My demodulator is a surprisingly pricey dedicated device ($80 for the model with analog output and $130 for HDMI output; I have the HDMI model) and is probably better than most of the ones built into TVs, so the preset we'll make based on my recording may even sound optimistic compared to what most people experienced back in the day.
P.S. My order for a pin converter is placed and I assume should get to me next week; you can count on NES frequency profiles when it does. The week after that I should get an A/V Famicom in the mail, and that weekend I'll get paid again and buy an NES-101 from a local store. Not sure there'll be big differences compared to other models using the same output method, but there's only one way to find out.
I tried a few different games, and haven't been able to hear any change from the 20 Hz, 17.5 kHz and 20 kHz bands.
40 Hz is just barely audible with my subwoofer; 15 kHz has a pretty audible effect on the sound.
Is there any value in keeping the 17.5 kHz and 20 kHz bands? Or is this just a case of my speakers/sound card/ears not being able to hear the difference?
If not, we could remove those and add 3 more bands to try to smooth out the deviations as much as possible.
Back when I was in college, I tested myself and couldn't hear a sine sweep above 17 kHz. So it might be your ears.
True, way up there isn't very audible unless you're boosting significantly; otherwise those frequencies get masked by the lower, more audible ones. Plus, many people won't be able to hear 17.5 kHz basically at all, and 20 kHz even less. Those bands were necessary for reproducing the shape of the curve at that point, but that's pretty low priority when you can't hear it. So you could try eliminating those bands and rearranging things a bit: starting after the 1 kHz band, have 2 kHz, 3 kHz, 4 kHz, 5 kHz, 6 kHz, 7 kHz, 10 kHz, 12.5 kHz, and 15 kHz. That would address the largest peaks and valleys, though I wouldn't be surprised if it creates more ripples. But let's give it a shot and see what happens.
I tried a couple of online tests, and it seems like I can't hear anything beyond 15.1 kHz or so.
Here's another build:
link
I removed the 20 Hz, 17.5 kHz, and 20 kHz bands, rearranged the 1+ kHz bands like you said, and added another band between 450 Hz & 1 kHz:
Before: 450, 640. After: 450, 600, 750.
(I didn't change the default dB gains, so the default values don't make sense anymore)
Using those bands, most ripples are a bit smaller, but it also made bigger ones at 840 Hz and 6.3 kHz, plus the differences at the bottom and top end are bigger, but that's expected and doesn't really matter. I'll see what I can do about those ripples, but I think these bands are more useful for users, so you should probably just go with these. This has got me thinking, though: you said it would be possible to also do presets with a greater number of bands than the EQ presents to users. I'm curious how faithfully 60 bands would work with this EQ. Maybe smaller changes over more closely spaced bands will minimize nonlinearities. If it's not too much work to add, and not too tedious to put in all those frequencies and decibel values, could we try it?
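For what it's worth, the idea of deriving a denser set of bands from a sparse preset can be sketched as gain interpolation on a log-frequency axis. The band list below is made up for illustration, not Mesen's actual preset data:

```python
import math

def interp_gain_db(bands, freq_hz):
    """Linearly interpolate a gain (dB) at freq_hz between the two
    surrounding EQ bands, on a log-frequency axis. `bands` is a list
    of (frequency_hz, gain_db) tuples (hypothetical values here)."""
    pts = sorted(bands)
    if freq_hz <= pts[0][0]:
        return pts[0][1]
    if freq_hz >= pts[-1][0]:
        return pts[-1][1]
    for (f0, g0), (f1, g1) in zip(pts, pts[1:]):
        if f0 <= freq_hz <= f1:
            t = (math.log10(freq_hz) - math.log10(f0)) / \
                (math.log10(f1) - math.log10(f0))
            return g0 + t * (g1 - g0)

# Made-up example bands: the geometric midpoint of 100 Hz and 1 kHz
# (about 316 Hz) lands halfway between the two gains.
bands = [(100.0, -1.0), (1000.0, -3.0)]
print(round(interp_gain_db(bands, 316.23), 2))  # -> -2.0
```

This is only a starting point; the real curve between bands depends on the filter shapes, which is exactly why the measured ripples don't vanish.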
Here's how I envision you could incorporate it into the GUI. You could make another drop-down menu to the right of the one for filter type where people could choose presets. There would be standard 20-band presets that move the sliders. Like I've said, I think these are good for people to use as a starting point. Say a user wants to make it sound something like the real thing, but they just think the treble could use some tweaking, so they select the preset and change a few bands rather than having to manually set all 20 and trying to figure out by ear where to set the other bands to sound the same. Then you could have in the same drop-down menu options that are named the same as the standard ones, but with something like "(High Precision)" appended, and selecting that would disable the sliders and a 60-band preset would kick in.
Here's an unimportant little semantic dilemma I have: what do you name the output method I've been using, the standard white RCA jack. When referring to video, it's composite, but that's a term specific to video. Sometimes people say A/V, but that's vague, refers to video when there is no video aspect involved here, and might be confusing when one preset is "NES A/V" and another is "A/V Famicom". You plug a coaxial cable with RCA plugs on the end into it, but people don't commonly refer to that kind as coaxial cable and instead use that term for the cables typically used to carry cable/satellite television signals that have F-type plugs on the end... like the NES and some Famicom RF adapters use. But only one end of the RF adapters has an F-type connector, the other end on both Famicom and NES is (drum roll) RCA.
What the hell can it be called that will convey to a layperson what it is and prevent confusion? Maybe it's just not necessary to say anything and it'll be assumed that output type is the default for systems that have it and RF is the unusual one that needs to be specified. So in this case, the preset would just be "Twin Famicom".
Anyway, here are the 60 values if you're willing to try my suggestion.
Code:
20.0 Hz: -4.4
22.5 Hz: -4.1
25.3 Hz: -3.8
28.4 Hz: -3.5
31.9 Hz: -3.1
35.9 Hz: -2.8
40.4 Hz: -2.3
45.4 Hz: -1.9
51.0 Hz: -1.6
57.4 Hz: -1.3
64.5 Hz: -1.1
72.5 Hz: -0.9
81.5 Hz: -0.7
91.6 Hz: -0.7
103 Hz: -0.6
116 Hz: -0.6
130 Hz: -0.4
146 Hz: -0.1
165 Hz: 0.0
185 Hz: -0.2
208 Hz: -0.3
234 Hz: -0.4
263 Hz: -0.3
295 Hz: -0.5
332 Hz: -0.3
373 Hz: -0.4
420 Hz: -0.2
472 Hz: -0.2
531 Hz: -0.2
596 Hz: -0.3
671 Hz: -0.3
754 Hz: -0.4
848 Hz: -0.4
953 Hz: -0.3
1.07 kHz: -0.2
1.20 kHz: -0.6
1.35 kHz: -0.5
1.52 kHz: -0.7
1.71 kHz: -0.7
1.92 kHz: -1.1
2.16 kHz: -1.4
2.43 kHz: -1.6
2.73 kHz: -1.9
3.07 kHz: -2.3
3.45 kHz: -2.7
3.88 kHz: -3.1
4.37 kHz: -3.6
4.91 kHz: -4.3
5.52 kHz: -4.7
6.20 kHz: -5.4
6.97 kHz: -6.2
7.84 kHz: -7.0
8.81 kHz: -7.8
9.91 kHz: -8.2
11.1 kHz: -8.8
12.5 kHz: -9.4
14.1 kHz: -10.0
15.8 kHz: -10.6
17.3 kHz: -10.9
20.0 kHz: -9.5
I'll give you an updated 20-band set with the new frequencies next time.
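In case it saves some typing, the list above can be converted into (Hz, dB) pairs mechanically. This is just a sketch based on the visible "value unit: gain" format of the list, nothing Mesen-specific:

```python
def parse_preset(text):
    """Parse lines like '20.0 Hz: -4.4' or '1.07 kHz: -0.2' into
    (frequency_in_hz, gain_in_db) tuples."""
    out = []
    for line in text.strip().splitlines():
        freq_part, gain_part = line.split(":")
        value, unit = freq_part.split()
        hz = float(value) * (1000.0 if unit.lower() == "khz" else 1.0)
        out.append((hz, float(gain_part)))
    return out

sample = """20.0 Hz: -4.4
1.07 kHz: -0.2
20.0 kHz: -9.5"""
print(parse_preset(sample))
```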
nothingtosay wrote:
what do you name the output method I've been using, the standard white RCA jack.
Without getting way too jargony, I'd call it "line level mono output".
Here's a build that contains a 20-band (customizable) preset and a 60-band fixed preset.
With this, the "fixed" presets can technically have any number of bands (and the number of bands can change from one preset to another).
The customizable presets need to use the 20 bands shown in the UI, though.
Obviously, more bands means worse performance - so 60 bands is probably as high as we should go.
UI-wise, I'm unsure of the value of the "filter type" dropdown (butterworth, etc). Most users will likely find it confusing, and may end up picking a less-than-optimal filter for the preset they've selected. I'd be tempted to force the butterworth filter at all times?
Also, I fixed a bug in the EQ library that caused performance to plummet when the game played no sound/music (FPS would drop to 20-30).
I've got the curve of how Mesen using the 60-band preset deviates from the hardware recording. Not bad at all. The picture is zoomed in to show only a 6 dB range, so the deviation is actually much smaller than in most of the previous ones. All ripples occur over a range of less than 2 dB, and there's only one comparatively large one, in a mostly inaudible place. Could you try changing the 15.8 kHz band to -9.6 dB so I can see if that evens it out?
I wonder if high-pass and low-pass filters wouldn't replicate this particular preset in a more linear way while also being more efficient if this uses a ton of CPU, but there's no guarantee the other hardware and output methods, especially RF, could be approximated faithfully with simple filters. I guess we'll see, but for now this sounds quite good and perceptually accurate, even if you can measure the difference.
I agree it probably would be best to just use Butterworth and eliminate the others. Not much reason to have more unintended rippling when adjusting your EQ.
In the 20-band EQ for the Twin Famicom preset, I can just about eliminate the large ripple at 6.3 kHz by adjusting the 6 kHz band, but there isn't one close enough to 850 Hz to fix the one there, and that one is a drop of 5 dB, which is a pretty big volume difference. I wouldn't want to eliminate any of these bands at this point, so I don't suppose you could space the sliders a little closer together and add one more in each row? 850 Hz should be one, and then basically the only other areas where the frequency response differs much are at the top and bottom, so either 20 Hz or 17.5 kHz. I'd probably go for the latter.
Sorry to ask so much of you. I'm grateful that you're willing to go to these lengths for this pet goal of mine.
I updated the last download link with a new build.
Run it once, and then close Mesen again - if you go in Documents\Mesen and open up the settings.xml file, you should see tags named <TwinFamicom60Bands> and <TwinFamicom60Gains> - these define the bands and gains for the 60-band preset. You can change these while Mesen is closed to customize the preset manually. (The BandXXGain tags are for the custom preset in the UI, so those aren't the ones you want to adjust). You can also add/remove bands if you want, but you need to make sure you have the same number of values in TwinFamicom60Bands and TwinFamicom60Gains otherwise it might crash.
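Since a band/gain count mismatch might crash, a quick sanity check on settings.xml before launching Mesen could look like the sketch below. The comma-separated value format inside the tags is an assumption on my part; adjust the separator to whatever the file actually uses:

```python
import xml.etree.ElementTree as ET

def bands_match(settings_xml):
    """Check that <TwinFamicom60Bands> and <TwinFamicom60Gains> hold
    the same number of values. Assumes comma-separated values, which
    may not match Mesen's real settings.xml format."""
    root = ET.fromstring(settings_xml)
    bands = root.findtext(".//TwinFamicom60Bands", "").split(",")
    gains = root.findtext(".//TwinFamicom60Gains", "").split(",")
    return len(bands) == len(gains)

# Hypothetical three-band fragment for illustration only.
sample = """<Configuration>
  <TwinFamicom60Bands>20,40,80</TwinFamicom60Bands>
  <TwinFamicom60Gains>-4.4,-2.3,-0.7</TwinFamicom60Gains>
</Configuration>"""
print(bands_match(sample))  # -> True
```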
The EQ has more end-user value than low/high pass filters, though, since it doubles as a way to customize bass, treble, etc. Performance-wise, the 60-band presets do make the max FPS drop quite a bit (varies by game), but even then the slowest games I know of (MMC5 games) are still able to run at 180 FPS on my 7-year-old CPU, so it's not really that much of an issue (considering this is an optional feature, much like the video filters).
For the 20-band presets, let's wait until you have measurements for other consoles - even if I were to add 2 more bands for the twin famicom preset, we may end up needing more bands (at other frequencies) for other presets down the road, so it may be best to get some more presets tested out and then see if/where we need to adjust bands overall? (and if it can be done without adding too many bands). Also, technically you could modify the 60-band preset in the settings.xml file to be a 22-band preset that matches this - so you should be able to try it out on your end pretty easily.
Okay, that's really great! I'm happy to mess with the EQ myself instead of troubling you each time I try to play Whack-A-Mole with a ripple. I got my pin converter. Gotta find where my NES's power supply went and then I'll make recordings with it. My A/V Famicom has also entered America and that should be here soon, so I'll report back with my findings soon. Thanks for all this stuff, Sour.
Just released 0.8.1.
The main changes are:
-Lots of code optimizations that make Mesen 15-35% faster (depends on game) than before. This makes it more or less even with FCEUX when using the "New PPU" option (overall, some games are faster in Mesen, others faster in FCEUX).
-Audio equalizer (no presets available just yet)
-PAL/MMC5/FDS emulation accuracy fixes
-Support for UPS/BPS patch files
-Option to enable OAM RAM decay emulation (decays to $FF after 3000 cpu cycles with no read/write access, roughly 1.7ms)
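As a sanity check of the "roughly 1.7ms" figure above: 3000 CPU cycles at the NTSC 2A03 clock (about 1.789773 MHz) works out to roughly 1.68 ms.

```python
NTSC_CPU_HZ = 1_789_773  # 2A03 CPU clock rate on NTSC consoles

decay_cycles = 3000
decay_ms = decay_cycles / NTSC_CPU_HZ * 1000  # cycles -> milliseconds
print(f"{decay_ms:.2f} ms")  # -> 1.68 ms
```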
At least with my 2C02G in my NES-CPU-07, its OAM DRAM seems to decay to a value of $10 for both X and Y coordinates. (I don't know about the other two bytes per sprite entry).
Ah, that's good to know. I picked $FF based on thefox's earlier observations:
Quote:
(sprites started disappearing, which must mean that the bits tended towards ones (EDIT: or zeroes, if TV/capture card wasn't displaying the top 8 pixels).
Though that quote does imply it could also have been $00.
$FF has the benefit of making the sprites disappear entirely, which makes it very obvious that something is wrong (from the perspective of a game developer I mean). But either way, it might be a good idea to add an option to select the actual value the RAM decays to.
I just noticed this as I was setting up Mesen for the first time:
In the Video Options window, the Aspect Ratio setting has the following:
- Auto (results in square pixels for me)
- NTSC (8:7)
- PAL (11:8)
- Standard (4:3)
- Widescreen (16:9)
If you're intending for NTSC and PAL to be accurate to hardware, you are actually interpreting "8:7" and "11:8" the wrong way. These are not display aspect ratios; they are pixel aspect ratios. By interpreting them as display aspect ratios, you are making the display thinner than it really should be.
For example, NTSC at 2× is giving me 548×480, which would require a pixel aspect ratio of 15:14. In an actual Famicom and NTSC NES, 8:7 is the pixel aspect ratio, so the display aspect ratio is 128:105 (1.219). When I select 2×, the display should actually be 585×480, not 548×480. Note that this is not counting the border regions at the left and right.
PAL at 2× is giving me 664×480. It should be 709×480. The pixel aspect ratio in PAL is 2,950,000:2,128,137 (approximately 11:8), which would make the display aspect ratio 9,440,000/6,384,411 (1.479). Therefore, the 2× resolution should be 709×480.
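Double-checking the arithmetic above: computing the 2× window width from each pixel aspect ratio, truncating the fractional pixel (which matches the numbers quoted), gives 585 and 709.

```python
from fractions import Fraction

def scaled_width(par, active_pixels=256, scale=2):
    """Output width for 256 active pixels at a given pixel aspect
    ratio (PAR), truncating the fractional pixel."""
    return int(active_pixels * scale * par)

ntsc_par = Fraction(8, 7)                 # NTSC pixel aspect ratio
pal_par = Fraction(2_950_000, 2_128_137)  # PAL pixel aspect ratio
print(scaled_width(ntsc_par), scaled_width(pal_par))  # -> 585 709
```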
EDIT: Wow, I've already gotten used to apostrophes as digit separators in C++, and they're seeping into my posts! By the way, more on pixel aspect ratio vs. display aspect ratio here.
And if you need a test case, you can tell that the video output stage is using the correct 8:7 or 11:8 sample aspect ratio if the circles in the "Linearity" test of 240p Test Suite are circular.
Thanks - I always had a suspicion that this was incorrect, but never really took the time to look into it.
I'll take a look at it when I get a chance, but in the meantime, you can use the "Custom" option in that dropdown to specify any ratio you want manually.
Here's an example. I own a full HD monitor (1920×1080 pixels). Assuming 256×240 pixels (no overscan, no cut-off scanlines), if I run my emulator with a PPU output of 512×480 pixels on this display, the pixel aspect ratio is 16:15 (pixel perfect). If I change the monitor's resolution to 640×480 pixels, the aspect ratio is no longer 16:15; I won't work out the numbers here, the point is just that a non-native resolution (640×480) on a full HD monitor obviously changes the PPU output's aspect ratio.
In short, I don't mind about the display aspect ratio. A NES image upscaled to 640×480 is 4:3. If I'm running at my PC monitor's native resolution (let's say 1920×1080), the aspect ratio is perfectly fine. Displaying the PPU image at any other non-native monitor resolution will break the default PPU image shape, making it wider (or narrower).
It was a simple fix in the end, I just replaced the ratios I was using with the ones you gave me. 2x NTSC now outputs 585x480 and 2x PAL is 709x480.
With that, the circles in the 240p test rom are perfectly circular - looks a lot better than the older ratios. :)
Thanks for letting me know about this!
Hello, I just wanted to ask if Mesen is one of the more accurate NES Emulators, why is it not mentioned on TASVideos's NES Accuracy Test Chart? Just asking, not trying to be a rude asshole.
We've recently seen the release of a bunch of brand new emulators (Any-Yes, Mesen, Nintaco), all of which were (I believe?) developed against our modern set of tests, so should be pretty accurate.
BaconIsGood16 wrote:
why is it not mentioned on TASVideos's NES Accuracy Test Chart?
Because that chart is years out of date and Mesen is a relatively new emulator.
The debugger in Mesen is very well done, but there are some features it doesn't seem to have that would be helpful for me.
When the debugger is open, Pause is disabled in the main emulator window. This makes things difficult if I want to break while something is happening on-screen, as I have to quickly switch to the debugger window, open the Debug menu, and choose Break. It would be nice if Pause caused the debugger to break instead when it's open. It'd also be nice to have an option to break whenever the debugger window becomes active.
Controller buttons can't be set to a certain state in the debugger, so there seems to be no way to, for example, keep the A button held while debugging. Something like checkboxes representing each button would work, at least for normal controllers.
Values in the Watch pane are only updated while breaking. It'd be nice to have it update every frame while the game is running too, or have an option for that.
It'd be nice if blocks in the debugger could be explicitly marked as data bytes or words, and show up as .db or .dw lines in the debugger even when "Show Only Disassembled Code" is checked.
Related to that, though this is a bit niche in comparison: the FDS BIOS has several functions that take pointer arguments inline after the jsr. Though you have the labels already, it'd be nice if it could actually show up as jsr LoadFiles / .dw $1234, $5678 and so on automatically.
Nicole wrote:
Related to that, though this is a bit niche in comparison: the FDS BIOS has several functions that take pointer arguments inline after the jsr. Though you have the labels already, it'd be nice if it could actually show up as jsr LoadFiles / .dw $1234, $5678 and so on automatically.
There are a bunch of places throughout NES-land that do function parameters that way.
nescom's clever-disasm calls it a TrailerParamRoutine.
Thanks for the feedback!
Breaking when trying to pause would make a lot of sense - I'll add that. Breaking on debugger focus too (as an option).
Being able to set the controller's input is something someone else has asked for, too. It's mostly a matter of coming up with a proper UI and finding space to fit it in - I can't really add too much to the main debugger window if I want to keep it usable on lower resolution laptops. It would probably need to be some sort of separate tool window.
Also, if you turn on the "Allow input when in the background" option, you can use a gamepad to hold down keys while in the debugger (won't work with a keyboard). It's far from ideal, but it's a workaround for that limitation atm.
An option to automatically update the watch should be fairly easy - initially I hadn't done that for performance concerns, but a lot has been done to speed up the expression evaluator since then, so it should be fine to update it once a frame or so (assuming there aren't hundreds of watch expressions)
Forcing specific blocks to show as bytes or words might be a bit tricky since there are already so many other options to control what is shown or not. Maybe the best way would be a general option to "Always show verified data" that would work with the other disassembly options + 3 actions: "Mark as data (bytes)", "Mark as data (words)" and "Clear data flag"? Would that be flexible enough for what you need to do?
As for arguments after JSR, I guess I could add some options to labels that would allow you to configure those (e.g name & size of each parameter)?
Is there a way to turn off the Recent Game Selection feature?
uVSthem wrote:
Is there a way to turn off the Recent Game Selection feature?
No, there isn't. Is there any reason you would need to turn this off?
It's a feature I never use. I'm not a save state kind of guy. I would rather turn it off and save a few writes to my drive.
uVSthem wrote:
It's a feature I never use. I'm not a save state kind of guy. I would rather turn it off and save a few writes to my drive.
I doubt it'll have much of an impact on even an SSD's longevity (an hour of internet browsing most likely writes more cache data on a disk than Mesen would if you used it daily for a year), but I added an option to disable the screen (and turn off the disk reads/writes associated with it) along with another option to reset the game instead of resuming where you left off when clicking on a game (which is something other people have requested).
Thank you.
Sour wrote:
uVSthem wrote:
It's a feature I never use. I'm not a save state kind of guy. I would rather turn it off and save a few writes to my drive.
I doubt it'll have much of an impact on even an SSD's longevity (an hour of internet browsing most likely writes more cache data on a disk than Mesen would if you used it daily for a year), but I added an option to disable the screen (and turn off the disk reads/writes associated with it) along with another option to reset the game instead of resuming where you left off when clicking on a game (which is something other people have requested).
I just released 0.9.1.
0.9.1 contains a number of improvements and new features related to HD Packs (including a tool to dump graphics into PNG files, new additions to the HDNes format that support conditional tile replacements, etc.)
On the debugger side of things (which might be what's most interesting to people here), some new features:
-A "Step Back" action that allows you to step backwards into the execution and rewind time, 1 CPU instruction at a time. (thanks to thefox for the idea!)
-The ability to setup controller input from the debugger's main window:
Attachment:
DebuggerInput.png [ 8.46 KiB | Viewed 2744 times ]
You just need to click the buttons on the controller's image to activate/deactivate buttons. If any button is turned on via the debugger, the emulator will ignore all regular input for that player.
-Filtering in the trace logger via conditions - which means you can use the conditional expressions the debugger supports for breakpoints/etc to filter the trace log's output. e.g, if you wanted to log any writes for $8000-$FFFF, you could type in "IsWrite && Address >= $8000" to get only those:
Attachment:
TraceLogger.png [ 103.35 KiB | Viewed 2744 times ]
I also realized the Trace Logger's UI had terrible performance on Linux under Mono (at least on my VM), so I scrapped the standard textbox I was using for it and replaced it with the custom control I use for the code window. It should be at least an order of magnitude faster than before, and now supports auto-refresh (at 300ms intervals) with 30k instructions shown in the list (which was impossible even on Windows before).
On top of those, I implemented a few of the options that were asked by Nicole some posts above (e.g option to refresh the watch window while the game is running, option to break the debugger on focus, and allow breaking the debugger by using the Pause shortcut from the main window)
It also includes a number of bug fixes for the debugger in general and the emulation should run faster than before with the debugger attached (10-15% improvement).
If you find any issues or have any suggestions (for the debugger or otherwise), let me know!
There's one fundamental thing I still can't find anywhere in Mesen: controller II microphone input, used in Raid on Bungeling Bay, the Japanese Zelda, Palutena no Kagami, etc.
List of known games that use or require the mic
If possible I'd request making it possible to assign the mic to its own separate input, not like in FCEUX where it replaces the controller II START button when enabled. That makes it hard to test both inputs at the same time in my game.
Or is it already supported and I'm just blind?
No authentic console has both the microphone and player 2 Start. The RF, Twin, Titler, and TV models have player 4 Start (bit 1 of fourth read of $4017), but you can map that separately in (for example) FCEUX for Windows by enabling the Famicom 4-player adapter. The AV Famicom lacks a mic.
Or are you referring to famiclones that have both a mic jack and detachable controllers, such as the Analogue Nt mini?
I know there's no official Famicom or NES that has both, but for an emulator I see no reason not to have it all. It's convenient for development purposes, allows testing setups that aren't as easy to do on real hardware, and I really see no problems with it. Nestopia has it.
Of course it's bad design for compatibility reasons to make a game that requires the use of either the mic or the controller II START/SELECT buttons, and I don't intend to do that.
Not sure what you mean by having 4-player adapter though? I do have that enabled in FCEUX and I still can't use the mic and con II Start at the same time.
Map a key to player 4 Start instead of player 2 Start, and use a controller reading routine that effectively ORs together the bits from controllers 1 and 3 and from controllers 2 and 4. That way, your game logic sees both the mic and player 2 Start.
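The OR-merge described above is simple; here's a sketch in Python (the button bitmask layout is just an assumption for illustration, and on real hardware you'd do this in the 6502 controller-reading routine):

```python
def merge_pads(pad1, pad2, pad3, pad4):
    """OR pads 1|3 and 2|4 so the game logic sees expansion-port
    inputs (e.g. the mic mapped to player 4 Start) merged into
    players 1 and 2. Pads are 8-bit button masks; the exact bit
    order is an assumption here."""
    return pad1 | pad3, pad2 | pad4

START = 0x08  # hypothetical bit for the Start button
merged1, merged2 = merge_pads(0x10, 0x00, 0x01, START)
print(hex(merged1), hex(merged2))  # -> 0x11 0x8
```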
Ah now I see what you mean. Yeah I already do that in my games so I can use expansion port controllers.
This won't work in a 4 player game though, and I think it's really unrelated to the emulator inputs that should preferably be separate.
Pokun wrote:
There's one fundamental thing I still can't find anywhere in Mesen. Controller II microphone input. If possible I'd request to make it possible to assign the mic to its own separate input, not like in FCEUX where it replaces the controller II START button when enabled.
You're not blind - there's no microphone support yet. It's been on my list of things to do for a while (along with adding support for all the other types of controllers that Mesen doesn't support yet) - the microphone should be pretty simple to add though, I'll try to get it done over the next few days.
That would be helpful. Mesen is such a good and useful emulator with many features not found elsewhere.
I just released 0.9.2, which adds/improves a number of things in the debugging tools:
-Added a "Developer Mode" option in preferences that moves all debugging tools to the "Debug" menu in the main window (and all debug tools can now be opened without opening the main debugger window first)
-Lua scripting
-Syntax highlighting in the assembler (and some bug fixes)
-A few new small features/improvements in the PPU viewer
-iNES header editor
-Integration with freem's ASM6 fork - ASM6f can now export .mlb & .cdl files which can be imported into Mesen's debugger to get labels/comments and code/data analysis
-Changes done to PRG ROM or CHR ROM via the debugging tools (e.g chr editor, assembler or hex editor) can now be saved as an IPS file
And then some non-debugger stuff:
-Added a first-run setup configuration dialog to choose between portable mode or store-in-profile mode
-Added some options to select paths (and toggle between portable/profile modes)
-Some fixes for HD packs and support for replacing background music and sound effects with ogg sound files.
-Support for the famicom's 2P microphone
-Improved startup performance
I'm on Debian 9, and I ran this:
Code:
sudo apt install libsdl2-2.0-0 mono-runtime
mono ~/Downloads/Mesen.exe
sudo apt install libmono-system-windows-forms4.0-cil
mono ~/Downloads/Mesen.exe
sudo apt install libmono-system-io-compression4.0-cil
mono ~/Downloads/Mesen.exe
Now we're getting somewhere. It asked me whether I wanted to put the settings in the same folder as the executable or in my profile, the latter being ~/Mesen. In GNU/Linux, it's common to put data specific to a particular user and application in a directory whose name begins with a dot so that it doesn't clutter the home directory. One possibility is ~/.config/Mesen.
After all that, it gave me an alert box with the following text, which I had to retype because the alert box did not let me copy its text:
The Microsoft .NET Framework 4.5 could not be found. Please download and install the latest version of the .NET Framework from Microsoft's website and try again.
This gives me no information as to which other libmono-system-*4.0-cil packages I need to install. Is there a verbose option of some sort to see what library isn't being found?
The Mesen README states that mono-complete is required. Yet APT says this is a 150 MB install on top of mono-runtime, S.W.F, and S.I.C. Should I just swallow this and be thankful it's smaller than Wine? I have to keep Wine around for FamiTracker anyway, even if I do eventually phase out the FCEUX debugging version in favor of Mesen. A 100 MB runtime here, a 100 MB runtime there, and pretty soon we're talking real disk space. So to answer your question elsewhere, Mesen is fine if you already have mono-complete installed to run another application. I guess FCEUX might be better if you already have wine installed to run another application but not mono-complete.
I bit the bullet and installed mono-complete, and the main window opened. But that wasn't what I expected to happen, as I typed this command:
Code:
mono ~/Downloads/Mesen.exe --help
The --help, -h, or /? switch is supposed to print a summary of commands to Console.Out and then exit.
It ran The Curse of Possum Hollow on the first try. NES 2.0 ✓
Time for Zap Ruder. "Light Detection Radius": That's fun.
240p Test Suite time. Looking at the circles in "Linearity", I see that the PAR isn't set by default (unlike, say, NO$SNS). In Video > General, I see how to set the PAR, and now they're rounded.
The "NTSC" filter works on the "Intel(R) Core(TM)2 Duo CPU L7500 @ 1.60GHz" in my ThinkPad X61, but NTSC 2x (Bisqwit) slows down.
Mono doesn't respect the GTK+ system font. It's apparently using SystemFonts.DefaultFont, which is hardcoded as Microsoft Sans Serif.
Now time to install it.
Code:
mv ~/Downloads/Mesen.exe ~/.local/bin/
echo '#!/bin/sh' > ~/.local/bin/mesen
echo 'exec mono "$HOME/.local/bin/Mesen.exe" "$@"' >> ~/.local/bin/mesen
chmod +x ~/.local/bin/mesen
So should I be putting feature requests in GitHub or elsewhere?
Thanks for taking the time to try it out.
For the application folder, would ~/.mesen be appropriate? Or is it more common to use the .config folder, as multiple applications seem to do?
I agree Mesen is not ideal on Linux due to its dependency on Mono. I'd have to take a look at what specific packages from mono it requires - off the top of my head, the runtime + winforms + compression (which you listed) should be the main ones, but it is possible that something else is needed as well. The readme asks for mono-complete because that is the simplest/shortest way to list the requirements.
Not being able to copy the message box is probably a Mono bug - as far as I know, message boxes in WinForms rely on the Win32 API, which always supports Ctrl-C as a way to copy the text?
That message box should have showed the actual error, though (most other popups that are the result of an exception do display the exception's details) - that's my bad, I'll fix it.
Good point about --help - I can see if I can fix it, but I vaguely recall trying to implement that and failing because getting a .exe marked as a non-console application to display anything in the command line on Windows is actually hard or impossible (if I am remembering this right). And marking the executable as a console application has its own downsides as well.
Bisqwit's NTSC filter is very slow in comparison to the "NTSC" filter (which is blargg's) - on a dual core processor, this might actually be even worse due to the filter's code attempting to render the picture using 2 different threads + a 3rd thread running the actual emulation.
Mono probably forces Microsoft Sans Serif because that's the default WinForms font, and a lot of programs would likely render incorrectly if the font's size changed from what they expected (e.g if the software is using hardcoded sizes in pixels for layout, etc.) - at least, that's what I would guess.
It's nice to know the precompiled .exe file actually runs on Debian, though! I don't think I had managed to make it run on Debian in my own tests (iirc, only Ubuntu, which I use to compile, and Fedora Core worked for me - but I am far from being a Linux expert).
Putting feature requests on GitHub is fine - that's what it's for, and the best way to ensure I don't forget about it :p
Sour wrote:
Thanks for taking the time to try it out.
For the application folder, would ~/.mesen be appropriate? Or is it more common to use the .config folder, as multiple applications seem to do?
I've seen both.
Code:
pino@pinox61:~/develop/parallax$ ls -ld ~/.fceux
drwx------ 8 pino pino 4096 Nov 20 2015 /home/pino/.fceux
pino@pinox61:~/develop/parallax$ ls -l ~/.config
total 88
drwxr-xr-x 2 pino pino 4096 Aug 11 21:13 autostart
drwxr-xr-x 2 pino pino 4096 Sep 5 19:27 dconf
drwx------ 2 pino pino 4096 Dec 19 2016 enchant
drwxr-xr-x 2 pino pino 4096 Nov 19 2015 galculator
drwx------ 2 pino pino 4096 Sep 5 19:26 gtk-2.0
drwx------ 2 pino pino 4096 Nov 17 2016 gtk-3.0
drwx------ 6 pino pino 4096 Dec 19 2016 hexchat
drwxr-x--x 7 pino pino 4096 Aug 14 18:17 inkscape
drwxr-xr-x 3 pino pino 4096 Aug 17 22:23 libreoffice
drwxr-xr-x 3 pino pino 4096 Aug 16 17:23 menus
-rw-r--r-- 1 pino pino 962 Sep 5 20:10 mimeapps.list
drwx------ 2 pino pino 4096 Jul 14 2016 Mousepad
drwx------ 2 pino pino 4096 Nov 19 2015 pulse
drwx------ 2 pino pino 4096 Aug 18 13:28 ristretto
drwxr-xr-x 2 pino pino 4096 Aug 12 19:37 sqlitebrowser
drwx------ 2 pino pino 4096 Nov 19 2015 Thunar
-rw------- 1 pino pino 626 Jun 5 2016 user-dirs.dirs
-rw-r--r-- 1 pino pino 5 Nov 19 2015 user-dirs.locale
drwxr-xr-x 2 pino pino 4096 Sep 4 18:19 vlc
drwx------ 2 pino pino 4096 Aug 14 00:59 xarchiver
drwxr-xr-x 7 pino pino 4096 Aug 23 20:01 xfce4
drwx------ 2 pino pino 4096 Nov 20 2015 xfce4-session
Sour wrote:
as far as I know, message boxes in WinForms rely on the Win32 API, which always supports Ctrl-C as a way to copy the text?
Wine has the same problem. When I was trying to get 0CC-FamiTracker working, Ctrl+C in its message box didn't copy the crash message.
Sour wrote:
Putting feature requests on GitHub is fine - that's what it's for, and the best way to ensure I don't forget about it :p
I've added a nonworking homebrew test ROM (which relies on three unsupported controllers) and a crash in input configuration.
Sour wrote:
For the application folder, would /home/.mesen be appropriate? Or is it common to use the .config folder and multiple applications do so?
The modern "official" way is to consult the XDG_CONFIG_HOME environment variable (falling back to $HOME/.config, or getpwuid(getuid())->pw_dir plus /.config), and use a directory under that. Other parts may instead belong in .local/share (XDG_DATA_HOME) or .cache (XDG_CACHE_HOME).
But ... there are still dozens of brand-new-programs that use the old convention of a single dotdir in your homedir, so don't sweat the complexity-for-the-sake-of-complexity that the FreeDesktop group seem to be overly fond of.
Quote:
Good point about --help - I can see if I can fix it, but I vaguely recall trying to implement that and failing because getting a .exe marked as a non-console application to display anything in the command line on Windows is actually hard or impossible (if I am remembering this right). And marking the executable as a console application has its own downsides as well.
Certainly the Windows solution to that problem was to show an alert box containing the help text. Not the worst option...
tepples wrote:
mv ~/Downloads/Mesen.exe ~/.local/bin/
echo '#!/bin/sh' > ~/.local/bin/mesen
echo 'mono $HOME/.local/bin/Mesen.exe "$@"' >> ~/.local/bin/mesen
chmod +x ~/.local/bin/mesen
I just have Debian's binfmt-support package installed, plus ln -s (path to Mesen build binary) (path to personal bin directory)
Has anyone suggested implementing Kazzo and CopyNES dumping integration into the Mesen debugger?
I'm on Debian 9 as well, and 181 MB for a NES emulator is just silly.
I'd probably be using it otherwise; Mednafen's debugger is a pain in the rear.
Sour: Have you considered a dedicated portable build?
Something that saves settings into the home folder by default, has no startup wizard, has no updater, no google drive integration, no built in database (so the GoogleDrive folder, MesenUpdater.exe & MesenDB.txt are not created/recreated or even included).
Something old school!
maseter wrote:
Something that saves settings into the home folder by default, has no startup wizard, has no updater, no google drive integration, no built in database
Those were the days!
lidnariq wrote:
Certainly the windows solution to that problem was to make an alert box containing the help text. Not the worst option...
Some tools definitely do this, Mesen at this point probably has too many command line options for them to fit though. I guess I could just list the main ones, and indicate where the user can find the rest of them in the UI.
B00daW wrote:
Has anyone suggested implementing Kazzo and CopyNES dumping integration into the Mesen debugger?
Nope - but since I do not own either device, that would probably be pretty hard to implement :\
Rahsennor wrote:
181 MB for a NES emulator is just silly.
And I could argue that worrying about $0.06 worth of SSD disk space (or < $0.01 worth of HDD space) is equally silly :)
Either way, it is what it is - the dependency on .NET/Mono is also very much the reason why there is a complete set of debugging tools in the first place. WinForms & its designer in VS trumps any other UI toolkit that I am aware of in terms of development speed & simplicity - without it, making the debugger would have probably taken hundreds of hours more of my time, and I quite honestly would not have bothered making one at all.
maseter wrote:
Sour: Have you considered a dedicated portable build?
Most of this would not necessarily be very hard to implement, but it would require a fair amount of compile-time conditions - is there any reason why you would need this? Other than for the sake of it being old school, that is :p
You could get the whole thing under 5 MB this way, and save another $0.06 worth of disk space!
No reason really, except that many people prefer portable stuff playable from flashdrives.
Could you also make it recognize disksys.rom (like fceux) and not just FdsBios.bin?
puNES (for example) turns portable, if you rename the .exe to puNES_p.exe!
And is video filter none the same as software?
maseter wrote:
No reason really, except that many people prefer portable stuff playable from flashdrives.
[...]
puNES (for example) turns portable, if you rename the .exe to puNES_p.exe!
This is exactly what the initial config dialog is there for. e.g if you put Mesen on a USB drive, run it and select "keep data in the same folder as Mesen", it will be portable. Previous versions before 0.9.2 used the same _P.exe shortcut as puNES, but it seemed like the vast majority of people thought that was too hard to find and not a very user-friendly way of handling it.
I'm not sure I see a benefit in making it recognize disksys.rom? The UI will ask for the BIOS a single time and create a copy of the file with the name it expects - you can technically give it a file with any name you want (what Mesen ends up doing with that file, e.g creating a copy called FdsBios.bin in its data folder, shouldn't really be the user's concern?)
What do you mean by "software"? All of the video filters are software filters - "None" just means no filter is applied at all (the PPU's output is converted to RGB and sent to the video card as-is)
To sidestep this selection dialogue, like when FdsBios.bin is present.
Because most bioses floating around on the net are indeed disksys.rom!
Compile-time conditions? How long does it take to compile Mesen?
"Compile-time conditions" refers to conditional compilation, the mechanism activated by #if in C++ or .if in 6502 assembly. The build system passes variables to the compiler or assembler, and the compiler or assembler uses those variables to determine which of two or more code paths to translate.
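As a tiny illustration (PORTABLE_BUILD is a made-up flag for this example, not an actual Mesen build option): passing -DPORTABLE_BUILD=1 on the compiler command line selects one branch, and the other branch never makes it into the binary.

```c
#include <string.h>

/* Hypothetical example of conditional compilation: choose the settings
 * folder at build time. Compiling with `cc -DPORTABLE_BUILD=1 ...`
 * translates the first branch; compiling without the flag translates
 * the second. The untaken branch is discarded by the preprocessor. */
const char *settings_dir(void)
{
#if defined(PORTABLE_BUILD) && PORTABLE_BUILD
    return "./";              /* portable: keep data next to the executable */
#else
    return "~/.config/mesen"; /* default: per-user config folder */
#endif
}
```

This is why each extra flag multiplies the number of distinct builds that have to be tested.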
Ok thanks for clearing that up, IANAC, I am not a coder!
Sour: What does Libretro support exactly mean on your roadmap, Libretro core support, or a mesen core for Libretro?
Mesen probably compiles in under a couple of minutes on most computers, I would imagine (except very low end stuff).
Like tepples said, this was mostly about conditional compilation - I try to avoid this whenever I can because it complicates the code and build process, and also increases the odds of a specific combination of conditional flags resulting in a build that just doesn't work anymore (e.g. because I did not test it).
Yes, the idea (eventually) is to make a Libretro core - this is not really a huge priority for me at the moment (esp. since I have never used Retroarch before) due to the amount of research it would imply. There's also the fact that a Mesen core may not run at full speed on some low-spec hardware (which would reduce its usefulness in many scenarios). It is listed in the roadmap because of the excessively large number of times I get asked the question "Will you make a libretro port?" :)
I'd much rather see other libretro cores work in Mesen!
How about including a public domain screensaver, instead of that black screen:
That or at least have some snow, like zsnes!
fireplace: https://forums.nesdev.com/viewtopic.php?f=22&t=13639
nestify: http://forums.nesdev.com/viewtopic.php?f=22&t=8472
I would also advise not using Y/Z as button inputs, as Y and Z are swapped around on non-US keyboards! Better to map X & C for B & A.
Making Mesen into a libretro core host isn't going to happen - that'd essentially be the same as scrapping everything I've done and making something else altogether :p
The black screen is already used by the "Game Selection Screen" since 0.9.0 or so - the only time you'll see the black screen is on the first time you run Mesen, before loading any game.
On an unrelated note, I finally finished documenting (almost) everything there is to say about Mesen:
https://www.mesen.ca/docs
It took far too much time to write, but it goes over most options, all the debugging tools and some more stuff - there's still a small number of things left to document, but it should be a decent start.
Sour wrote:
And I could argue that worrying about $0.06 worth of SSD disk space (or < $0.01 worth of HDD space) is equally silly
Who said anything about disk space? It's my ISP I'm worried about. Have pity on us poor farmers.
Sour wrote:
Rahsennor wrote:
181 MB for a NES emulator is just silly.
And I could argue that worrying about $0.06 worth of SSD disk space (or < $0.01 worth of HDD space) is equally silly
Verizon Wireless offers four LTE Internet Installed plans:
- $60 for first 10 GB in a month, $10 per GB thereafter
- $90 for first 20 GB in a month, $10 per GB thereafter
- $120 for first 30 GB in a month, $10 per GB thereafter
- $150 for first 40 GB in a month, $10 per GB thereafter
Once a subscriber is in overage, a 181 MB download costs roughly $1.81. 100 MB here, 200 MB there, and soon it starts adding up to real money.
Exede offers three Exede Satellite Internet plans:
- $50 for first 12 GB of 12 Mbps data outside 3 AM to 6 AM local time; 1 Mbps thereafter
- $75 for first 25 GB of 12 Mbps data outside 3 AM to 6 AM local time; 1 Mbps thereafter
- $100 for first 50 GB of 12 Mbps data outside 3 AM to 6 AM local time; 1 Mbps thereafter
Once a subscriber is in overage, a 181 MB download can take half an hour.
To put it another way: there's a reason the "computer" furniture in Animal Crossing: Wild World for Nintendo DS and Animal Crossing: City Folk for Wii makes dial-up noises, despite the games offering Wi-Fi multiplayer from release through May 2014.
I fully understand that in some very specific conditions 180mb may be a lot - and in those specific conditions, you also have the same issue with a very large amount of software in general.
Like I said, it is what it is - the Mono dependency is there to stay, I do not have the interest (nor the time nor energy) to rewrite 1+ megabyte of UI code (which would literally take months of my time at this point). I try to make Mesen as accessible and user friendly as I possibly can - but I am not getting paid for this, and have already sunk a few thousand hours of my free time coding this. You cannot expect a single developer making an emulator for fun to cater to the needs of every single individual on the planet - if you cannot justify the download size for Mono on Linux, then all I can really say is "sorry, use something that matches your needs better".
Sour wrote:
I fully understand that in some very specific conditions 180mb may be a lot - and in those specific conditions, you also have the same issue with a very large amount of software in general.
Like I said, it is what it is - the Mono dependency is there to stay, I do not have the interest (nor the time nor energy) to rewrite 1+ megabyte of UI code (which would literally take months of my time at this point). I try to make Mesen as accessible and user friendly as I possibly can - but I am not getting paid for this, and have already sunk a few thousand hours of my free time coding this. You cannot expect a single developer making an emulator for fun to cater to the needs of every single individual on the planet - if you cannot justify the download size for Mono on Linux, then all I can really say is "sorry, use something that matches your needs better".
Just wanted to chime in and say thanks for what you're doing, and I agree with you. You've done great work for the community with this project. If the download size is too big for someone, they can use another option. It's not your job to worry about all those edge cases.
I see there's hd audio replacement. Is that a new feature? Are there games using this? I searched google for something similar recently and couldn't find anything so I thought it wasn't possible.
gauauu wrote:
Just wanted to chime in and say thanks for what you're doing, and I agree with you. You've done great work for the community with this project. If the download size is too big for someone, they can use another option. It's not your job to worry about all those edge cases.
Thanks!
nesrocks wrote:
I see there's hd audio replacement. Is that a new feature? Are there games using this? I searched google for something similar recently and couldn't find anything so I thought it wasn't possible.
This is new in 0.9.2 - as far as I know, nobody has used it yet. The documentation for it isn't quite done yet, but there's some info in this thread.
I'm wondering if it is possible to use a combination of Lua with HD audio replacement to replicate the music replacement function in HDNes. To do this I need to trap the writes that change the background music and use Lua to write to the replacement registers instead. There is an "addMemoryCallback" in the Lua API - does it also block the write, or does the write still happen? And how does the callback function get the address and value of the write? For me, writing a Lua script is easier than changing the ROM itself. I'm not using it at the moment but it would be nice to know that it is an option.
BTW can you add a link to the doc from your main page?
Thanks.
I just released 0.9.3 which fixes a few bugs here and there, and adds customizable shortcut keys for all of the main window's shortcuts (in the preferences), which should take care of the use case tepples had.
It should also help alleviate the issues where the debugger's key bindings were making it hard to work with at times (e.g because F5 to resume execution could end up loading a save state instead) - now you can remap the save/load state bindings to other shortcuts which do not conflict with the debugger's.
I also changed the default folder on Linux to ~/.config/mesen, but ~/Mesen will continue to be used if it exists (i.e. this only affects new installations) - you can move your ~/Mesen folder to ~/.config/mesen if you prefer that.
@mkwong98 I also updated the Lua API a bit in 0.9.3 to improve addMemoryCallback - any integer value returned from the callback function in Lua will now alter the read/write's result. I also updated the documentation with an example of this (+ how to get the address/value being read/written).
Now, as for using this to replace sound, I wouldn't recommend it - this would imply having both the debugger & HD packs running at once, which on a lot of slower computers will result in pretty bad performance (e.g. potentially below 60 fps). The best option here would probably be for me to add support for HDNes' way of replacing audio, but without making any UI for it - I haven't really looked at how HDNes does it, so I'm unsure how easy this would be to add, though.
I also added a link to the documentation on the website + in the emulator itself.
This emulator is tops...
A couple of things I've noticed before and after the rollout:
- Holding joystick/controller inputs seems to affect keyboard inputs, like holding shift-f* to make a savestate.
- I'll check now, but there have been some instances of Mesen/debugger crashes when editing RAM in the memory viewer. I'll report after more extensive usage in 0.9.3.
- Something that seems a bit out of the ordinary is that watch locations do not update in the debugger in real time like other emulators. Watch is nice for seeing addresses in realtime, grouped in a manner the user wishes, without having to scroll up and down through the memory viewer. I know there is logging too, but realtime is good for active gameplay/analysis.
Thank you for all your talented efforts!
B00daW wrote:
- I'll check now, but there have been some instances of mesen/debugger crashes with editing RAM in the memory viewer.
I had it crash on me several times when trying to use cheats. Apparently enabling any cheat crashes the emulator for me.
B00daW wrote:
- Holding joystick/controller inputs seems to affect keyboard inputs, like holding shift-f* to make a savestate.
- I'll check now, but there have been some instances of mesen/debugger crashes with editing RAM in the memory viewer. i'll report after more extensive usage in 0.9.3.
- Something that seems a bit out of the ordinary is that watch locations do not update in the debugger in real time like other emulators. Watch is nice for seeing addresses in realtime, grouped in a manner the user wishes, without having to scroll up and down through the memory viewer. I know there is logging too, but realtime is good for active gameplay/analysis.
I thought I had mostly taken care of regressions between the new shortcut system and the old one, but there are some cases where it requires you to hit exactly the shortcut and nothing else, if there are potential conflicts with other shortcuts (this logic might not be perfect atm)
The watch has an option to auto-refresh while running (Options->Refresh watch while running) - or is that option not doing what you are expecting?
Let me know if you find a way to reproduce any crashes, happy to fix them.
tokumaru wrote:
I had it crash on me several times when trying to use cheats.
Are you talking about 0.9.2 or 0.9.3? 0.9.2 had a bug where opening the cheats window caused crashes pretty often. This is fixed in 0.9.3 - I just tried using cheats again to make sure, and it seems to be working normally on my end
Here's how I'd handle input containing overlapping shortcuts:
There are two kinds of keys: modifier keys and letter keys. Modifier keys are Ctrl, Alt/Option, Shift, Super/⊞/⌘, and joystick buttons. Letter keys are all other keyboard keys, the mouse buttons, and directions on the mouse wheel. In a shortcut, the user would be expected to press modifier keys before the last letter key, such as Ctrl+S not S+Ctrl, or Shift+click not click+Shift.
Shortcut evaluation would follow these steps:
- If a shortcut contains a letter key, and the most recently pressed key is a modifier key, do not activate the shortcut's action.
- If all keys for a shortcut are pressed, and the shortcut contains a letter key, and there is no shortcut that consists of the currently held keys plus additional letter keys, activate the shortcut's action on key-down.
- If all keys for a shortcut are pressed, and there is no shortcut that consists of the currently held keys plus additional keys, activate the shortcut's action on key-down.
- Otherwise, save the shortcut's action and activate it on key-up if there are no intervening key-downs.
For example, if someone defines Ctrl+R and Ctrl+Shift+R:
- Ctrl down: Ctrl is a modifier key; no shortcuts match
- Ctrl and R down: R is a letter key; matches a subset of Ctrl+R and Ctrl+Shift+R; matches all of Ctrl+R; Ctrl+R activated
If anyone else can think of a set of shortcut bindings that might conflict, I'll run them through this logic.
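The superset check in the rules above could be sketched like this (the keymask representation, the KEY_* names and fires_on_keydown are all invented for illustration; rule 3, for shortcuts with no letter key, is omitted for brevity):

```c
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int keymask; /* one bit per key */

/* Hypothetical key bits for the Ctrl+R / Ctrl+Shift+R example. */
enum { KEY_CTRL = 1u << 0, KEY_SHIFT = 1u << 1, KEY_R = 1u << 2, KEY_S = 1u << 3 };
static const keymask LETTER_KEYS = KEY_R | KEY_S; /* the non-modifier keys */

/* True if shortcut i can activate on key-down: all of its keys are
 * held, and no other shortcut consists of the held keys plus
 * additional *letter* keys. Extra modifiers alone don't force a
 * wait, which is why Ctrl+R fires immediately even when
 * Ctrl+Shift+R is also bound. */
bool fires_on_keydown(keymask held, const keymask *shortcuts, size_t n, size_t i)
{
    if (held != shortcuts[i])
        return false; /* not all keys for this shortcut are down */
    for (size_t j = 0; j < n; j++) {
        keymask extra = shortcuts[j] & ~held;
        if (j != i && (shortcuts[j] & held) == held && (extra & LETTER_KEYS))
            return false; /* e.g. Ctrl+R vs Ctrl+R+S: defer to key-up */
    }
    return true;
}
```

With Ctrl+R and Ctrl+Shift+R bound, holding Ctrl+R fires on key-down; with Ctrl+R and Ctrl+R+S bound, holding Ctrl+R has to wait for key-up.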
Sour: Oh yeah I see the refreshing Watch list now. Great! However, if you have refresh watch list checked, you cannot remove a Watch by right-clicking while it's updating. It's greyed out and will not let you.
Also, is there a way to give an address in Watch a nickname? For example: [$17E] "Spawn Checkpoint" ?
Tried using a "Label" in the debugger. The UI is a bit weird with it... at times you can only right-click and label something that's highlighted. Also, after giving a label to something you can no longer search for the original address name - in case you forgot your label.
The delete button being grayed out in the watch is a bug (because the auto-refresh feature is clearing out the list's selection on each refresh) - I'll fix it.
Right-clicking on an address to add a label seems to be working properly for me? Do you mean the "Edit Label" option is grayed out even though you right-clicked on an address?
You can also add labels by right-clicking the label list on the right and choosing "Add" - so in your example, if you set a label for InternalRam to $17E and call it Spawn_Checkpoint, then you can write [Spawn_Checkpoint] in the watch window instead of [$17E].
Good point about the search, not sure that there is any simple solution other than adding a "Disable all labels" toggle that would force the code window to ignore labels. On the other hand, you can sort the label list on the right by clicking on the columns, so it shouldn't be that hard to find the label matching a given address in most cases.
OK, cool... I tried the labels before, but the Watch didn't update the name once it was already labeled in the Watch section as the true address. It's a little awkward, but as long as it works somehow.
Been noticing some strange things with breakpoints when you right-click the breakpoint section in the debugger and add a range. For instance, I had a range of $5000-$5FFF (without adding the $ symbol, of course). Sometimes I got an "OK" button; other times it was greyed out. I had "execute" and "write" checked, and tried unchecking both, rechecking, etc. It only worked after I closed the window and tried again. Not sure what happens from time to time... Seems intermittent.
Another thing is the Memory Editor. I worked on a game where I edited a RAM location and changed the nibbles in a byte. There was a specific routine very intimate to the main loop, so the value updated instantly as soon as one nibble was entered and did not produce the desired effect. In that instance the full byte value needed to be updated at once. It would be good if the cursor jumped between bytes instead of nibbles and waited for both characters to be changed.
Love your work and I'm critical of the things I love... You're a gem.
It would be hard to update the watch expressions based on labels, given the complexity of the expressions that can be used - a simple find and replace wouldn't be reliable.
The breakpoint validation logic is pretty simple - if you had one of the types checked and your range made sense (e.g. start < end), then the only thing left would be that you had a badly formed condition in the condition field. Next time it happens, take a screenshot of the window - that would probably help figure out what is wrong.
Forcing whole bytes to be edited would be sort of annoying when you do only need to modify half of it, though (and keeping track of this state would require a fair bit of extra logic for the control) - I'd suggest just breaking the execution before making changes to memory if you need to do this.
Sour wrote:
tokumaru wrote:
I had it crash on me several times when trying to use cheats.
Are you talking about 0.9.2 or 0.9.3? 0.9.2 had a bug where opening the cheats window caused crashes pretty often. This is fixed in 0.9.3 - I just tried using cheats again to make sure, and it seems to be working normally on my end
It was 0.9.2, but I'm now using 0.9.3 and still getting errors when using the cheat finder. I got these after pressing the "reset" button:
Attachment:
mesen-cheats-1.png [ 21.44 KiB | Viewed 2988 times ]
Attachment:
mesen-cheats-2.png [ 16.42 KiB | Viewed 2988 times ]
Clicking "OK" just shows the same error window over and over.
Thanks for the report, was a small bug in the UI's code - it's fixed.
There is one thing that is bothering me a bit. I'm not sure if it's just on my PC, but when I pause the emulator, hold a button on the keyboard (X, which I configured to be the NES "A" button), and hit left Control (which is my hotkey for "Run single frame"), the game doesn't advance a frame. In fact, it will only work if I'm not pressing any key while I press Control, which kind of defeats the purpose of pausing the emulator to test something specific in the game. Is this a setting or just how it is?
I don't know what the deal is with keyboard input interfering with other emulation controls, but doesn't the debugger have a virtual controller that you can use in situations like this?
nesrocks wrote:
In fact, it will only work if I'm not pressing any key while I press Control, which kind of defeats the purpose of pausing the emulator to test something specific in the game.
This is probably the same issue as the one mentioned a few posts back related to shortcuts - the whole shortcut handling changed drastically in 0.9.3, and there are still some minor issues with it that I need to fix. For now, try setting the "Run Single Frame" shortcut to use a key that is not used by any other shortcut - the behavior you're getting is most likely due to the Ctrl key being used in other shortcuts (e.g Ctrl-O)
And also, like tokumaru said, there is a virtual controller in the debugger as well as another unrelated "Execute single frame" option mapped to (I believe) F8, you could also try using that instead (although ideally both ways should work!).
In case someone is interested, AxlRocks used the HD pack support in Mesen to make 16-bit-like graphics for Megaman 1.
Personally, I think it looks pretty damn good.
Link:
https://www.romhacking.net/forum/index. ... ic=25426.0
Uh... are those graphics copied from the Genesis port of the game? I don't really see the point if this is the case.
From the post on romhacking.net:
Quote:
It's inspired by and takes a few graphics from Wily Wars, but with altered color palettes and most graphics are adaptations of the NES originals.
So it sounds like a lot/most of it was redrawn from scratch? Megaman's sprite is definitely not the same between both, for example.
I just released
0.9.4.
Highlights:
-Added support for 20 more NES/Famicom peripherals (controllers, keyboards, mice, barcode readers, external storage, etc.). All of these can now be properly recorded in movies, and used over netplay as well.
-60.0 fps mode to avoid dropping a frame every ~10 seconds
-Exclusive fullscreen mode support
-On Linux, it runs 25-30% faster than before when compiled with clang, thanks to enabling LTO.
Debugger-wise, it adds a lot of small/medium features all over the place and fixes a good amount of bugs as well.
There's been some additions to the Lua API as well.
Wishlist or bug report: I was trying to set a breakpoint on palette writes, but trapping writes to PPU $3F00-$3FFF never triggered.
Ah, good catch - that's probably always been a problem since the palette writes don't count as being a write to VRAM (which is what triggers the breakpoints). Doesn't look like conditional breakpoints can help here, either.
You could most likely get it done with a Lua script, but that's a bit overkill for a relatively common use case. I'll most likely change the debugger so that read/writes to $3F00-$3FFF are trapped like other VRAM accesses.
It was pretty trivial to implement - $3F00-$3FFF read/writes (those done through $2007 read/writes) will now trigger PPU breakpoints properly.
If you grab the latest commit and build from that, it should work.
Hi Sour, I have 2 suggestions when using sprite conditions in HD Pack:
1. When the condition type is spriteNearby, automatically adjust the offset value according to the orientation of the sprite tile. For example in SMB I have a condition of -8 when 2 different sprite tiles are side by side:
<condition>face,spriteNearby,-8,0,58,FF162718
[face]<tile>1,55,FF162718,224,128,1,N
But this doesn't work when Mario is facing the other direction. It makes more sense that when the sprite is flipped, the condition is flipped too. Otherwise I need 4 copies of the same condition.
2. Add 2 more parameters to spriteNearby, and maybe spriteAtPosition, to specify the orientation of the sprite.
Thank you.
mkwong98 wrote:
1. When the condition type is spriteNearby, automatically adjust the offset value according to the orientation of the sprite tile.
Yea, that would make a lot of sense and would help reduce conditionals, like you said. Shouldn't be too hard to implement either, I think.
mkwong98 wrote:
2. Add 2 more parameters to spriteNearby, and maybe spriteAtPosition, to specify the orientation of the sprite.
This is actually already available, but it looks like I forgot to document it completely. There are built-in conditions you can use for sprites: [hmirror], [vmirror] and [bgpriority]. They will only be "true" whenever that particular flag is set on the sprite.
Hi, I have a bit of a problem with the controller setup.
I'm using a DS4 wireless controller and I've set up the "circle" button as the "A" button, "X" as the "B" button, "triangle" as turbo A and "square" as turbo B, and this is what happens when I press the buttons:
circle: A button is activated
X: A and B are activated
triangle: A and B are flickering (the flickering means they are in turbo mode right?)
square: the B button is activated (not in turbo mode)
But it gets weirder: if I set the "X" button as the "A" button and press it, only the A button is activated instead of both buttons. In fact, the preset controller settings have this in mind with:
square: B button is activated
X: A button is activated
circle: A button is in turbo mode
triangle: B button is in turbo mode
If I try to choose any different button combination I get this weird effect where both buttons are activated, or one is activated and the other is in turbo mode, or vice versa.
Is there a way to fix this? I want to be able to play with circle as the A button and X as the B button.
This emulator looks really good, and it does have high accuracy and a friendly UI, but the controller setup irks me!
So, please? Help?
PS: Also, is it possible to use the d-pad and the joystick at the same time?
The controller key bindings setup window has 4 tabs - it's possible you have different key bindings in different tabs, which could cause pressing X to activate both A and B - if you selected multiple controller configurations in the setup dialog, you will have key bindings on multiple tabs. Clear them all out and re-do your bindings in the first tab and it should work properly.
You can use the D-Pad and the analog stick together by binding the D-Pad in one tab, and the Analog stick in another, which will allow you to use either of them to move.
Sour wrote:
The controller key bindings setup window has 4 tabs - it's possible you have different key bindings in different tabs, which could cause pressing X to activate both A and B - if you selected multiple controller configurations in the setup dialog, you will have key bindings on multiple tabs. Clear them all out and re-do your bindings in the first tab and it should work properly.
You can use the D-Pad and the analog stick together by binding the D-Pad in one tab, and the Analog stick in another, which will allow you to use either of them to move.
It worked! thank you very much!
Sour wrote:
mkwong98 wrote:
1. When the condition type is spriteNearby, automatically adjust the offset value according to the orientation of the sprite tile.
Yea, that would make a lot of sense and would help reduce conditionals, like you said. Shouldn't be too hard to implement either, I think.
mkwong98 wrote:
2. Add 2 more parameters to spriteNearby, and maybe spriteAtPosition, to specify the orientation of the sprite.
This is actually already available, but it looks like I forgot to document it completely. There are built-in conditions you can use for sprites: [hmirror], [vmirror] and [bgpriority]. They will only be "true" whenever that particular flag is set on the sprite.
Hi, I'm working on an HD pack editor and will start working on sprites soon - can you give some examples of using these built-in conditions?
Also, I'm using conditions for the title screen of Double Dragon II to show graphics on blank background tiles but the frame rate on my PC is below 50FPS. Can you test to see if you can get full 60? I attached the test pack to this message.
Thanks.
Conditional rules were actually heavily optimized just a few days ago.
kya is working on another HD Pack for CV1 (different from the one he made on Nestopia last year) and abused conditionals and background graphics so much it would slow the execution to ~90fps on my computer for some portions of the game. After optimizing the code, the same HD Pack runs around 250fps. I just tried your pack on my end - it runs at 120fps w/ 0.9.4 vs 190fps as of the latest commit. 4x scale resolution does hurt performance a fair amount though, so it's normal for the pack to run relatively slow.
You can grab a build with the optimizations here:
download
Let me know if/how much it helps - hopefully it gets you well above 60fps. I'll try to profile your HD pack when I get the chance to see if I can squeeze a bit more performance out of the HD pack code (but it's been optimized a lot already, so it's starting to get a bit hard to make it any faster)
mkwong98 wrote:
can you give some examples of using these built-in conditions?
They work like the other conditions, but will only have any impact if the tile is a sprite. They are automatically available in any hires.txt file, without having to define a <condition> tag for them:
Code:
[bgpriority]<tile>0,2304,FF271607,416,672,1,N
[hmirror]<tile>0,2304,FF271607,416,672,1,N
[vmirror]<tile>0,2304,FF271607,416,672,1,N
Also, I've added a few new condition types recently. All of these now exist:
-tileAtPosition/tileNearby
-spriteAtPosition/spriteNearby
-memoryCheck/memoryCheckConstant (to compare 2 addresses in memory or an address vs a constant)
-frameRange (to change tiles based on the frame number)
They are not documented anywhere yet, you can check
HdPackLoader.cpp to get an idea of how they work.
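As a rough idea, definitions follow the same general shape as the other conditions - something like this (the exact parameter order and operator syntax here are guesses, so double-check them against HdPackLoader.cpp before relying on them):
Code:
<condition>introFrames,frameRange,0,600
<condition>bossActive,memoryCheckConstant,00A5,==,04
[introFrames]<tile>0,2304,FF271607,416,672,1,N
[bossActive]<tile>0,2304,FF271607,416,672,1,N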
I haven't had the chance to take a look at changing sign of the offset values when mirroring is enabled (what you asked last time) - I'll try to get this done before the next release.
Thank you very much! The frame rate is back to 60 now. I have a few questions with sprites:
1. Can you add the negation of bgpriority, hmirror and vmirror conditions?
2. Is there a way to specify the relative orientation of the sprite tile in the spriteNearby condition? For example I have sprite tile "[" and I want to replace it with one HD tile when there is a "<" tile to the right or when the whole object is flipped, i.e." [<" and ">]". Then I want to replace it with another HD tile when there is the h mirror of the "<" to the right, ie "[>" and "<]".
3. I added 2 optional Y/N values to the end of the tile values in my editor to represent the h mirror and v mirror flags during copy and paste. Can you add this to the "Copy All Sprites(HD Pack Format)" in Sprite viewer?
Thanks.
Sour, I found a problem with HD tile for sprites, it seems the game is not showing pixels of the HD tile if those pixels are transparent in the original.
mkwong98 wrote:
1. Can you add the negation of bgpriority, hmirror and vmirror conditions?
I'll take a look - will probably just add the ability to negate any condition by prefixing it with a "!"
mkwong98 wrote:
2. Is there a way to specify the relative orientation of the sprite tile in the spriteNearby condition? For example I have sprite tile "[" and I want to replace it with one HD tile when there is a "<" tile to the right or when the whole object is flipped, i.e." [<" and ">]". Then I want to replace it with another HD tile when there is the h mirror of the "<" to the right, ie "[>" and "<]".
Isn't this the same as what you were saying previously? (e.g inverting the sign of the x/y positioning if the sprite is mirrored in that direction?)
mkwong98 wrote:
3. I added 2 optional Y/N values to the end of the tile values in my editor to represent the h mirror and v mirror flags during copy and paste. Can you add this to the "Copy All Sprites(HD Pack Format)" in Sprite viewer?
OK, should be easy.
mkwong98 wrote:
I found a problem with HD tile for sprites, it seems the game is not showing pixels of the HD tile if those pixels are transparent in the original.
This was working in 0.9.4 but doesn't work in the latest build? It's most likely caused by a bug fix that was suggested by kya - I included it, but hadn't really tested it yet. I will need to ask him why he needed this fix in the first place to see if there are workarounds for his case, and revert the change.
Sour wrote:
Isn't this the same as what you were saying previously? (e.g inverting the sign of the x/y positioning if the sprite is mirrored in that direction?)
No, I'm asking for some way to compare the h/v flip of the two tiles in the spriteNearby condition, not just the relative x/y position.
Sour wrote:
This was working in 0.9.4 but doesn't work in the latest build? It's most likely caused by a bug fix that was suggested by kya - I included it, but hadn't really tested it yet. I will need to ask him why he needed this fix in the first place to see if there are workarounds for his case, and revert the change.
Now I have tested it with other games, it seems to be the problem with bg priority sprites only.
Thank you for your hard work!
mkwong98 wrote:
No, I'm asking for some way to compare the h/v flip of the two tiles in the spriteNearby condition, not just the relative x/y position
So basically you want to be able to apply the spriteNearby/spriteAtPosition conditions only when the target sprite (e.g. the one at the specified location) is mirrored horizontally or vertically? I'll try to figure out the best way to get this done (in terms of hires.txt specs).
For the rest, it should all be taken care of:
-The bg priority sprite bug should be fixed (but the behavior is still different compared to what it was before - it should allow your use case & kya's use case to both work properly).
-All conditions can be prefixed by ! to invert their meaning. e.g: [!hmirror]<tile>[...] This works for user-defined conditions, too
-I added Y/N flags for mirroring to the sprite copy feature
Thank you very much! Can you give me a download link?
Now that I'm finishing up with the project that has consumed the last few years of my life, I've been getting used to some new things and trying out Mesen more. I really like it a lot, and I might even jump ship from FCEUX for my daily use. The main noticeable practical difference for me at this point is just that it seems to use about 3x the CPU power as FCEUX in my measurements, but that's still a pretty acceptable level of performance. (I'm a heavy laptop user, though, so battery consumption and fan noise make CPU usage a bigger deal for me than most people, probably.)
One request:
The Famicom input options don't let you replace the standard controller. Is there some reason this is restricted? The AV famicom had detachable controllers with the standard NES plugs, and there are some potentially useful combinations of that with the expansion port (e.g. Family BASIC keyboard + SNES mouse).
Conversely the NES does have an expansion port that is largely compatible with Famicom peripherals with some trivial rewiring. Is there a need to disable the expansion port option for the NES setting?
Second question:
Is there an easy way to download the documentation for offline use? (Similar reason about the laptop above, I'm often offline.)
mkwong98 wrote:
Can you give me a download link?
I'll try and make a build tonight.
rainwarrior wrote:
seems to use about 3x the CPU power as FCEUX
Is that with the New or Old PPU? With Mesen's debugger opened or closed?
3x seems like a lot - in my experience, FCEUX's New PPU seems to run roughly at the same speed as Mesen (e.g at 100% CPU usage I get about 350 FPS in Zelda 2), but maybe there is something about my computer that is making FCEUX slower than it should be.
To (mostly) remove the restriction on P1/P2, you can turn on "Use HVC/NES-101 behavior" in Emulation->Advanced (this allows the Mouse+Keyboard combos, for example). There is no way to turn on the expansion port for the NES, though. The UI is mostly built this way for the sake of reducing the number of options for most users (NES users) and having proper open-bus behavior (i.e. you can use tepples' allpads test and have it display every type of NES/Famicom by changing the settings). Code-wise there is no huge reason for the restriction, though - I could probably lift it when "Developer Mode" is turned on, or something similar.
For the documentation, there's no way at the moment - I'll have to take a look. (Just copying the files to a disk doesn't work properly due to absolute paths being used in the HTML output)
Edit: Btw, are you using 0.9.4 or the latest build I posted in the debugger feedback thread? Or did you build it yourself?
I've just discovered the ability to rewind/fast-forward, and the debugger. Just wow...
UndoDB here in Cambridge has built a whole company around the reversible debugging story, so it's not a feature I'd expect to see in a NES emulator*. But I'm loving it already! I've had quite a few bugs in the past that were tricky to reproduce and this will help a lot. Fast-forwarding/re-winding is also a nice way to look closer at movements and animations.
One thing that annoys me though: I haven't figured out a way to quickly reload the last ROM. So at the moment I go to recent files, and I then have to CTRL+R to make it start. Would be great if there was a quicker way to do this with a single button hotkey, that could be mapped to my joypad as well. (maybe there already is and I'm just being blind here)
* And yes, I know the NES being more memory-constrained can make the implementation simpler than a reversible gdb on x86. But it's still no less amazing to see it in action.
Sour wrote:
rainwarrior wrote:
seems to use about 3x the CPU power as FCEUX
Is that with the New or Old PPU? With Mesen's debugger opened or closed?
New PPU (Edit: this was wrong - Old PPU was actually selected, see my later post). No debuggers open, just started up in both cases.
Edit: I was wrong about this. Yeah, I also gauged it roughly by using the fast forward features. With the limit taken off on Mesen, on the same ROM I was seeing a little under 400 fps on Mesen in fast forward, and about 1300 fps on FCEUX, pretty consistently.
On a more coarse level, measuring the same game running in attract mode on both, which probably more fairly takes into account rendering and everything else, the CPU usage is around 2.2% for FCEUX, and on Mesen it varies more but averages around 4%, so it seems to be a bit closer in more general usage (though fast forward is one of those 'constant use' debugging features for me). CPU usage with the debugger open seems significantly higher, though (~6% in Mesen, whereas in FCEUX it doesn't significantly affect performance).
Still pretty much double for me in that more fair condition, but it's still very reasonable CPU load. Probably not enough of a difference to keep me using FCEUX - I only really mentioned it because it's one thing that does stick out to me a bit. (How often a program makes my laptop fan speed up is a real metric for me. ;P)
Aside from that, the only glaring difference seems to be the TAS editor, but while I use the movie playback feature a lot in FCEUX, the TAS editor I only used very seldom. I really love that you have an input override in the debugger itself; that's really cool.
Similarly there's a "text hooker" feature which is really good if you're trying to translate something, but not really of use otherwise. If I was doing translation work (which I am not, normally), I expect this feature would keep me bound to FCEUX.
Also, I like that you mention open bus behaviour, since that's one area where FCEUX has no configuration (just outputs 0 at any out of bounds memory access) and kinda sucks.
The anti-aliased font rendering in the debug disassembly sticks out to me as well, but only because I disable AA wherever I can, so any non-native rendering is really glaring. Not a big deal for the disassembly part because the font is big and legible, but the small labels lose legibility very quickly if I make the text any smaller than the default. The option to view PRG address shrunken and below makes this a problem... I do want to be able to see both that and the CPU address more or less constantly, but it's a lot of vertical space to give up, and shrinking it makes it too hard to see. (Putting it on the left might be nice, but that in itself takes a lot of horizontal space in this font... FCEUX did a nicely compact hybrid address with a 'bank' prefix but it's annoyingly hardcoded as 16k banks, which sucked.)
In the memory view, would be nice to be able to e.g. right click and jump to the same location in PRG ROM and vice versa, or jump to it in the debugger disassembly (or vice versa). Having options like this on a right click and the ability to jump around quickly between address spaces would help with getting around a lot.
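To illustrate the kind of mapping such a feature needs (a sketch only, using UxROM-style fixed/switchable banking as an example - this is not Mesen's actual code, and the function name is made up):

```python
# Sketch: converting a CPU address to a PRG ROM offset for a
# UxROM-style mapper ($8000-$BFFF switchable, $C000-$FFFF fixed
# to the last 16 KiB bank). The current bank register value is an
# input - which is why the jump needs live mapper state, not just
# arithmetic on the address.

BANK_SIZE = 0x4000  # 16 KiB

def cpu_to_prg(cpu_addr, switch_bank, bank_count):
    """Return the PRG ROM offset for a CPU address, or None if the
    address isn't mapped to PRG ROM at all."""
    if not 0x8000 <= cpu_addr <= 0xFFFF:
        return None
    offset = cpu_addr & (BANK_SIZE - 1)
    if cpu_addr < 0xC000:
        bank = switch_bank       # switchable region
    else:
        bank = bank_count - 1    # fixed last bank
    return bank * BANK_SIZE + offset
```

With bank 3 of 8 selected, $8005 maps to PRG offset $C005, while $C005 lands in the fixed last bank at $1C005 - same bottom bits, very different bank, which is exactly why a right-click "jump to PRG" shortcut would save a lot of manual bookkeeping.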
Could really use PC in the debugger's "go to" menus. That's the place I most often need to recentre the debugger on.
The debugger has keys that conflict with other things, e.g. F5 to resume conflicts with savestate stuff, which is a bit of a hazard. Wondering why Esc for pause isn't also resume in the debugger? As well, none of the debug keys seem to be in the shortcuts configuration?
These are all pretty nitty picks, none of them are even close to dealbreaking.
rainwarrior wrote:
Edit: Btw, are you using 0.9.4 or the latest build I posted in the debugger feedback thread? Or did you build it yourself?
0.9.4 from the website.
Anyhow, still getting used to it, but I am really liking it. Thanks for making such a good debugging emulator!
Is there any way to customize the default set of debug register labels? I'm thankful that there's at least an address suffix on them but in general I find these make the disassembly harder to parse, visually.
I found the option to disable them, but I really love that you can put hover comments on the labels, so I actually like that part of them but... the names really get in the way for me. Would try to name them as just, e.g. "$2000" but apparently I can't start a label with a $ for some reason? (Still, even if renamed, they're just going to revert back to the default names the next ROM I open.)
Also the default names for the square registers both start with "Sq1" rather than being Sq0/1 or Sq1/2. (Famitracker users would probably be used to calling them pulse 1 and 2, but I'm sure many prefer 0 and 1 -- customizable default names would accommodate both.)
Going further down the wishlist hole, though, would be the ability to add new labels to the default set that are included automatically only if they match a specific mapper. You could document all the registers, have easy access to breakpoints, etc. all of that automatically without having to manually set up every time you open a ROM.
Re: debugging and CPU usage... it actually gets pretty bad for me if my laptop is not in full power mode.
Without the debugger open it's using about 50% of a core, and with any debug windows open it goes over 100% (with slowdown) and the audio becomes unlistenable (latency not a factor here, it's CPU bound). I could turn the audio off if I don't need it for debugging, but it's still maxing out the CPU core anyway. (I can debug with FCEUX with ~30% of a core at the same power settings.)
So... I guess for me the CPU usage does matter significantly when I'm on battery.
No problem at all when powered or on desktop though. I really like Mesen otherwise though, will probably end up using both, FCEUX for when on battery. (For comparison, the non-debugging fast forward FPS is like 90fps Mesen vs 350fps FCEUX on battery.)
This is not a request to make if faster, BTW, just trying to measure and describe the situation as it exists for me. Optimize only if it's important to you. (All of my suggestions above are just suggestions, Mesen seems pretty good to me with or without future improvement.)
mkwong98 wrote:
Can you give me a download link?
Here's a build with the HD Pack changes:
download
Bananmos wrote:
I've just discovered the ability to rewind/fast-forward, and the debugger. Just wow...
There's also a "Step Back" option that rolls back the execution by a single instruction, if you didn't notice it.
Bananmos wrote:
One thing that annoys me though: I haven't figured out a way to quickly reload the last ROM.
Power Cycle reloads the ROM from the disk - you can map this to any key/button combination. I'm not sure why you need to reset after opening from the recent files, though? Does it not work properly otherwise?
rainwarrior wrote:
[...]
Thanks for the feedback! Trying to respond to everything, but if I miss something important, let me know:
You probably should use the latest build (
download), it does have a large number of additions/improvements (especially in the debugger, e.g syntax highlighting, the new code scrollbar, code preview in the tooltips, event viewer tool, etc.) compared to 0.9.4. (if you have negative/positive feedback on the differences with 0.9.4, let me know)
I guess there must be something with my FCEUX configuration that is causing it to be abnormally slow.
Opening the debugger in Mesen does impact performance a lot - this is because it turns on a lot of stuff (trace logger, profiling, CDL logging, access counters, etc.). This is why you can open the trace logger and immediately see the execution's history (vs needing to start it beforehand in FCEUX, for example). It's a bit of a compromise between usability & performance. It might be possible to add an option to disable some of these to improve performance a bit - I'll see if it makes a noticeable difference. I might also be able to move the processing for some of these to another thread, but Mesen already uses 2+ threads for the emulation itself, and a lot of the debugger runs in other threads, so arguably it might not make too much of a difference on a dual core CPU.
TAS Editor/Text Hooker - Yea, these are more or less the last 2 tools that are missing. I do want to get to those eventually, but a TAS Editor, in particular, would probably take a lot of planning to get right. Fun fact: you can play .fm2 & .bk2 (bizhawk) movie files in Mesen (the UI doesn't tell you this, but it will accept the file if you select one). FCEUX movies tend to desync most of the time, though.
To go to the PC, you can double-click the top of the callstack or use "Show Next Statement" (Alt+*).
The shortcut keys for the debugger are a UI-side feature, while the rest of the shortcuts are a C++-side thing integrated with the emulation core. Ideally, I do want to add the ability to customize them, though. The keybindings at the moment are mostly meant to mimic Visual Studio's default keybindings for C#.
The code window (and a lot of other things) all use GDI+ (this is what most .NET drawing routines use) - maybe GDI+ doesn't respect Windows' own anti-aliasing settings? There is also a .NET function to draw strings with GDI instead of GDI+, but it is slightly slower according to the tests I've done, which is why I've stuck to GDI+ so far. Maybe there is a simple way to fix the anti-aliasing - I'll check.
Memory Viewer navigation: Should be easy to add those.
PRG ROM display: I'm not sure what to suggest here - a bank number is vague in a lot of situations. Do you need this information to always be visible? Or would it being shown in a tooltip be sufficient? (e.g on mouse-over on the cpu address?)
Customizing the default labels: That might be nice, though it might be a pain to edit them in the UI. Maybe allow a simple XML file to be put in Mesen's debugger folder to configure the default workspace on a per-mapper basis? Labels can't start with $ because it would just be confusing when trying to parse what's under the mouse. Ideally, I would need to allow the label to be blank (making it possible to define comments in the code) and allow the comment to show in the tooltip for an address that has a comment (it doesn't at the moment).
Performance: At the moment, Mesen is more or less optimized to the limits of my ability (I've spent dozens of hours profiling & optimizing the emulation core) - at this point, making the core any faster would probably require attempting to reduce cache misses, etc. (And I've already stopped myself from making some accuracy-related improvements because they would essentially yield no benefits, and drop performance by another 20-40%). The debugger side of things might still have a bit of room for improvements, though. What CPU does your laptop have? It would at least let me compare with the ~2012 laptop I have.
I'll take a look at the new build.
Sour wrote:
I guess there must be something with my FCEUX configuration that is causing it to be abnormally slow.
I have to apologize, I made a mistake before, I thought New PPU was on because I almost always have it there, but it got set to old at some point. Sorry! So, you can reduce the severity of the relative comparison a bit, but it's still true that Mesen is too slow when debugging on battery, for me.
With New PPU I get more like 650fps at full speed rather than 1300. When not fast forwarding, about 3% total CPU instead of 2.3%, vs. Mesen kinda jumping from 3-6% but maybe mostly a little over 4%.
When switching to battery mode, there's still enough overhead that I can debug in FCEUX with no slowdown unless I basically open all of its debug viewers at once and turn all their refresh frequencies up to full.
Sour wrote:
Opening the debugger in Mesen does impact performance a lot - this is because it turns on a lot of stuff (trace logger, profiling, CDL logging, access counters, etc.). This is why you can open the trace logger and immediately see the execution's history (vs needing to start it beforehand in FCEUX, for example).
Hmm, if you've already got an automated rewind capability, is there really a need to be proactively keeping traces as well? Couldn't you regenerate them on demand from the rewind?
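The idea could work roughly like this (a toy sketch, not Mesen's code - the Core class here is a stand-in for a deterministic emulator with save/load-state support):

```python
# Toy sketch of regenerating an execution trace on demand from
# rewind keyframes, instead of logging every instruction up front.

SNAPSHOT_INTERVAL = 30  # frames between keyframe snapshots

class Core:
    """Minimal deterministic 'emulator': its whole state is a frame counter."""
    def __init__(self):
        self.frame = 0

    def save_state(self):
        return self.frame

    def load_state(self, state):
        self.frame = state

    def run_frame(self, trace=None):
        if trace is not None:
            trace.append("frame %d" % self.frame)
        self.frame += 1

class Rewinder:
    def __init__(self, core):
        self.core = core
        self.keyframes = []  # list of (frame, state) snapshots

    def on_frame(self):
        # During normal play, only keyframes are stored - no tracing.
        if self.core.frame % SNAPSHOT_INTERVAL == 0:
            self.keyframes.append((self.core.frame, self.core.save_state()))
        self.core.run_frame()

    def trace_range(self, start, end):
        """Re-run [start, end) from the nearest earlier keyframe, tracing on."""
        _, state = max(k for k in self.keyframes if k[0] <= start)
        resume = self.core.save_state()
        self.core.load_state(state)
        trace = []
        while self.core.frame < end:
            self.core.run_frame(trace if self.core.frame >= start else None)
        self.core.load_state(resume)  # put the user back where they were
        return trace
```

Determinism is the catch, of course - inputs and interrupt timing have to replay identically - but a working rewind feature already guarantees that.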
I know you're using the CDL to aid the disassembly, but in my view it's only a little bit helpful - it's usually very easy to spot "nonsense" data disassembled as code. If it's consuming significant CPU, I'd definitely turn it off if it were an option. (When starting to use the debugger, showing unidentified data was probably the first option I went looking for and turned on.)
Maybe part of why it's not that much of a problem for me in FCEUX is that I can easily control the starting line of the disassembly view. If I see garbage at the top, I just move it up or down a line to find the alignment of the code. In your disassembly I see no easy way to move by one line - the mouse wheel only goes in 3-line increments (and the desire here is 1-byte increments). A clickable arrow at the top or bottom of the scroller might help.
Sour wrote:
I might also be able to move the processing for some of these to another thread...
If that's possible, it might help make it more usable on battery for me, but it depends on whether the stuff that really requires single threading isn't already using the whole core. (Like, I was seeing ~17% in the CPU usage, which I interpret as one maxed thread (12.5%), and another ~40% of a thread on a second core? I think there's a fair ways to go before that maxed thread would have any headroom.)
Sour wrote:
To go to the PC, you can double-click the top of the callstack or use "Show Next Statement" (Alt+*).
Thanks. The call stack is particularly cool. Since getting to it seems like the same function to me as what "go to" does, I didn't expect to find it in a different menu. I think it would still be worth having as another option in the go to list.
I notice Escape will now unpause the debugger, at least if used from the game window. That's an improvement, though I would much rather have the same key pause and unpause in the debugger window too - similar to how having Run on F5 be the same key as Load State is just asking for mistakes. It is NOT easy to tell which window currently has focus, especially since keys pressed in one often affect the other.
(Also, I am finding myself leaving FPS indicator on just so that I can have a pause indicator that doesn't darken and obscure the whole screen.)
Sour wrote:
The code window (and a lot of other things) all use GDI+ (this is what most .NET drawing routines use), maybe GDI+ doesn't respect Windows' own anti-aliasing settings?
Well, the code window looks to me like it's specially rendered, and not using native text stuff. I assume the other uses are the overlay on the game window, where it's totally innocuous to me, but when staring at a big block of disassembly it really sticks out like a sore thumb. With a well hinted font and no anti-aliasing, I'm used to being able to make the text smaller and still have very good legibility, so maybe I'm just a bit fussy about fonts like that. :S
If I could make it smaller and not anti-aliased, I could fit more text in the window and feel more comfortable reading it. As it is, it's OK in the default size, but I can't really make it smaller (mainly because it has 2 sizes of text in it, and the smaller one gets blurry really quick). Like if I could choose the font and specify its size numerically, that would be many times more useful than the increase/decrease size interface where I don't even know a precise number for the size.
Sour wrote:
PRG ROM display: I'm not sure what to suggest here - a bank number is vague in a lot of situations. Do you need this information to always be visible? Or would it being shown in a tooltip be sufficient? (e.g on mouse-over on the cpu address?)
The main problem is that the option is either to add an extra half-line of small text (which reduces the amount of code I can see in the window, and is hard to read) or to replace the CPU address entirely (losing access to the CPU address). What I was suggesting is that even having it on the left, in-line with the CPU address at the same size, would be better than either of those options.
Actually, since the CPU and PRG addresses are never going to differ in the bottom 3 nybbles, you could even omit those... but a hover tooltip would also do the job. The main need is just having easy access to the PRG address, without having to give up 50% of the vertical space for it, or having to go into the menu and toggle options every time I want to see it.
I am noticing now you have a nice bank display along the bottom, though! That's really good. That's a pretty decent way to see what bank I'm currently in.
Sour wrote:
Customizing the default labels: That might be nice, though it might be a pain to edit them in the UI. Maybe allow a simple XML file to be put in Mesen's debugger folder to configure the default workspace on a per-mapper basis?
Well, for me editing them in a text editor would be perfectly good.
Sour wrote:
Labels can't start with $ because it would just be confusing when trying to parse what's under the mouse. Ideally, would need to allow the label to be blank (this is possible to define comments in the code) and allow the comment to show in the tooltip for an address that has a comment (it doesn't at the moment).
Well, the blue highlight is nice, since it gives a visual indication that it's a known label and might have a hover tip. If I can't call it $2000 I'd probably just work around with like _2000 or something, not a big deal.
Anyhow, disabling them is the best option for me right now, and if I want custom names I can make them for a specific project, but I really like the tooltip documentation they provide; it's just that the names get in the way for me.
Sour wrote:
What CPU does your laptop have? It would at least let me compare with the ~2012 laptop I have.
Intel Core i7-4700MQ @ 2.40 GHz (4 core / 8 thread)
So hey, funny story - I was able to improve the performance when the debugger is opened by about ~60-70% ... by turning on the debugger while running PGO for the release builds (so essentially I did nothing, except help the compiler do its job better).
I could probably get another 10-15% more performance by disabling the profiler & access counters when the memory tools aren't opened (and probably a tiny bit more if I disabled the trace logger too), but I figure if this is enough for your laptop, maybe that extra 10-15% wouldn't be all that useful.
e.g: I went from ~180fps w/ debugger opened to 310fps in one case, and 140fps to 240fps in another.
Hopefully that helps the situation with your laptop a lot.
Also, you were absolutely correct about the font (I had never really noticed how blurry the text was.) Apparently I turned on antialiasing for text in that custom textbox all the way back in 2015 when I first created it. I switched it to use ClearType instead (which is what the font I'm using is meant to be used with), and the text is far clearer than before on my end. Zooming out also keeps it pretty readable. Let me know if you feel like choosing the font/exact size would still be useful despite this.
Build:
download
rainwarrior wrote:
A clickable arrow at the top or bottom of the scroller might help.
That's actually something I completely forgot about when I made the new scrollbar, and is something I've actually found myself wanting to use a few times - I'll add one.
Being able to realign the disassembly isn't really possible unless I rebuild the disassembler from scratch, though, essentially. At the moment Mesen disassembles strictly based on the CDL data, and disassembles the entire CPU memory when pausing the debugger (it doesn't restrict itself to disassembling the viewport).
For the shortcut keys, the best suggestion I have at the moment would be to change the shortcuts for save/load states in the preferences so that they don't overlap with the debugger's hardcoded shortcuts. I'll try to get shortcut customization added to the debugger relatively soon, it shouldn't be that hard.
Pause screen: So I guess you'd want a "pause" screen that's similar to FCEUX? (e.g just a small pause icon)
For PRG Addresses: I can add an option to display it inline like there currently is for the byte code. And/or maybe a way to display the upper PRG bits only, like you said (not too sure what I would call this, though)
Sour wrote:
Apparently I turned on antialiasing for text in that custom textbox all the way back in 2015 when I first created it. I switched it to use ClearType instead (which is what the font I'm using is meant to be used with), and the text is far clearer than before on my end. Zooming out also keeps it pretty readable. Let me know if you feel like choosing the font/exact size would still be useful despite this.
I've actually made a point of modifying my local build to use Lucida Sans Typewriter instead of Consolas / DroidSansMono.
Sour wrote:
So hey, funny story - I was able to improve the performance when the debugger is opened by about ~60-70% ... by turning on the debugger while running PGO for the release builds (so essentially I did nothing, except help the compiler do its job better).
I could probably get another 10-15% more performance by disabling the profiler & access counters when the memory tools aren't opened (and probably a tiny bit more if I disabled the trace logger too), but I figure if this is enough for your laptop, maybe that extra 10-15% wouldn't be all that useful.
Well, "good enough for rainwarrior's laptop not on full power" is only a good metric if you're rainwarrior - but this actually ALMOST gets up to full speed. The audio is still unlistenable whenever the FPS goes below 60, but I actually seem to be getting ~55fps here instead of ~35fps before. With the audio muted this is actually pretty tolerable debugging. (PAL games seem to do slightly better but are still dipping below 50fps... just not as much.) So yes, another few % would actually make a big difference for me specifically. Probably any significant speed improvement will broaden the number of users who can make use of it.
Sour wrote:
I switched it to use ClearType instead (which is what the font I'm using is meant to be used with), and the text is far clearer than before on my end.
Yes, that's a big improvement over the previous AA method, I guess by virtue of subpixel rendering, but ignoring the user's ClearType setting and forcing it on is a problem of its own (e.g. anyone with a rotated screen, or some unusual dot arrangement). I'm not currently using a rotated screen, but I've known quite a few programmers who do (and it's been me at some points). Control of AA becomes critical for that minority of users. (Though with at least the ability to increase font size, it can be coped with that way.)
Sour wrote:
Let me know if you feel like choosing the font/exact size would still be useful despite this.
There's really never a case where I think letting the user choose their font isn't useful, but I will say that what you had was already usable, and with ClearType it's better than it was.
Sour wrote:
Being able to realign the disassembly isn't really possible unless I rebuild the disassembler from scratch, though, essentially. At the moment Mesen disassembles strictly based on the CDL data, and disassembles the entire CPU memory when pausing the debugger (it doesn't restrict itself to disassembling the viewport).
Well, I only made the CDL suggestion because you had it on a list of constant-care things that might improve performance if optional. I don't know how much extra CPU a CDL actually takes.
I would say, though, that not being able to view disassembly for code that has not yet been run is a big disadvantage, completely unrelated to any CPU the CDL might consume. When trying to step through stuff I definitely need to speculate about code that could be run but hasn't yet.
Sour wrote:
Pause screen: So I guess you'd want a "pause" screen that's similar to FCEUX? (e.g. just a small pause icon)
Oh, yes, a small icon would be better than using the FPS overlay to fake it. The full darkening overlay seems fine for just playing games but really obtrusive when debugging... but with it off, any time I was on a screen without constant animation it was really confusing; especially with keyboard input not being global, I had a hard time knowing whether I was accidentally in the debugger window, or paused, or both.
Sour wrote:
For PRG Addresses: I can add an option to display it inline like there currently is for the byte code. And/or maybe a way to display the upper PRG bits only, like you said (not too sure what I would call this, though)
Even just the PRG address tooltip idea would do the job adequately, IMO, though if you're looking for a name for an address missing the low 3 nybbles, I'd maybe call that "compact".
P.S. I just found the "event viewer" and it's amazing, though it seems to be missing a colour legend to know what's what. (Ah, I guess I can open the configure colours window and use that... though this seems to prevent any input in other windows while open.)
Weird thing I noticed: if I open the Input menu while paused in the debugger, the game will run for 1 frame before the input config window opens?
Sour wrote:
Here's a build with the HD Pack changes:
download
Thank you!
I'm interested to see how kya uses conditions in the new pack and what tools are used to make the pack.
Made some changes:
-Font & base font size can be configured for all tools (debugger, memory viewer, etc.). It lets you select any font, but only monospace fonts will display properly in most cases.
-Added up/down arrows to the scrollbar (you still can't change the disassembler's alignment with this, though)
-Added a "Go to PC" option in both Go To menus
-Added an option to display the PRG address in a "compact" way next to the CPU address
-When the debugger is opened and execution is paused, a blue pause icon will show up at the top left of the screen (it can be disabled in the debugger). This icon will also be shown when the debugger is closed if the "disable pause screen" option is enabled.
-It's now possible to customize the default labels on a per mapper basis. I was about to create an .xml format for it, when I realized I already have my own format for labels :)
To configure labels for all mappers, create a Debugger\DefaultLabels.Global.mlb file in Mesen's home folder.
On top of this, you can also create files for specific mappers (both the global file & mapper-specific file will be imported, with the mapper-specific labels having priority). Name them like this: DefaultLabels.[mapper number].mlb
For FDS or NSF files: DefaultLabels.FDS.mlb, DefaultLabels.NSF.mlb
If anyone ends up making label sets for mappers/etc (or just improving the default labels I have at the moment), I'd be more than happy to include them in Mesen releases, too.
Edit: I didn't explain .mlb files at all - they're the import/export format for Mesen's labels (much like FCEUX's .nl files). So you can configure the labels inside the debugger's UI, export them to .mlb and save that file with one of the naming schemes above to override the default labels.
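For anyone unfamiliar with the format, a default-label file is plain text with one label per line. A hypothetical sketch follows; the `Type:Address:Label:Comment` layout is based on what Mesen's label export produces (G for registers, P for PRG ROM offsets), so double-check against a file you export yourself:

```
G:2000:PpuControl_2000:PPU control register
G:4017:ApuFrameCounter_4017:APU frame counter / second controller port
P:0000:EntryPoint:Offset into PRG ROM, not a CPU address
```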
Build:
download
Opening the input dialog causing a frame to run is "normal" - this is because it has to pause the emulator (due to multithreading) to update the list of connected gamepads, and pausing the emulator can only be done at the end of a frame (except when done from the debugger).
Nice to know it's running faster, pity it's still not up to 60+fps, though. I'll try and see if I can squeeze a bit more performance out of it.
mkwong98 wrote:
I'm interested to see how kya uses conditions in the new pack and what tools are used to make the pack.
He's made pretty heavy use of conditionals in his pack (has 700+ conditions defined), but I'm unsure how he built the pack. The definition file is over 5k lines long, so hopefully he didn't write all of it by hand. Maybe you can send him a PM on romhacking.net and see if he's willing to show it to you?
Sour wrote:
He's made pretty heavy use of conditionals in his pack (has 700+ conditions defined), but I'm unsure how he built the pack. The definition file is over 5k lines long, so hopefully he didn't write all of it by hand. Maybe you can send him a PM on romhacking.net and see if he's willing to show it to you?
He told me he is not using any tool.
Anyway, here is the object editor that I'm working on:
https://drive.google.com/open?id=11YFxzgX7krW-IDA5INaUkVypcnn6JzNl
How to use it:
1. Run the HD Pack Builder of Mesen.
2. Start a new project and locate the output of the builder.
3. Compose game objects by copying tiles from the PPU viewer of Mesen or from the HD Images and ROM viewer pages of this editor.
4. Add image files with replacement tiles to the pack.
5. Set the object to use the replacement tiles.
6. Add conditions if a tile appears at multiple locations.
7. Add palette swap if the same replacement tiles (with changes to brightness, hue and saturation) can be used for the new palette.
8. Choose "Generate pack data" when done.
How do you search for a hex sequence in the PRG ROM? It doesn't seem to be working for me on the latest build
hackfresh wrote:
How do you search for a hex sequence in the PRG ROM? It doesn't seem to be working for me on the latest build
Thanks - that's a new bug from a couple of weeks ago, it should be fixed in this build:
download
mkwong98 wrote:
Anyway, here is the object editor that I'm working on
That's starting to look pretty nice!
A few issues I had:
-The color selection palette seems to be set to a transparent color by default? (I just get a blank gray form in the popup, but I can set colors by right-clicking on them)
-On the 3rd tab, in the textbox that gives the palette values (8 hex characters), pressing delete/backspace in this textbox causes the application to crash if there is no "project" loaded.
-libwinpthread-1.dll was missing from your zip (downloaded a version of it and it worked fine though)
-In the first tab when creating objects, I can't seem to configure conditions no matter what I try (both options in the right-click relating to conditions don't do anything for me)
It also crashes every time I try to start a project on a version of kya's pack that I have (it worked fine with axlrocks' megaman pack though) - but that might be because it is using some of the newer features (or it might have some leftover invalid tags in it, etc)
Quote:
There's also a "Step Back" option that rolls back the execution by a single instruction, if you didn't notice it.
Oh, but I certainly did, and that was part of my praise
Two small feature requests around this though:
1) I find myself using this feature a lot to rewind/fast-forward around a certain time point, to watch weird glitches/not-yet-great-looking animations in more detail. I think the slower rewind is just the implementation being more demanding on my computer, but I kind of like how it rewinds in slow-motion.
However, this creates quite an inconsistency with the fast-forward functionality which is a lot faster. Would it be possible to offer a "slow-mode" rewind & "fast"-forward, which would have their speed configured, so that these two could match? (at least on a computer where speed isn't an issue)
2) I also find that rewinding has a bit of a delay before it starts, when I rewind/fast-forward back&forth in succession. Don't know how hard this might be, but it would be nice if this delay could be eliminated. But again, it could just be my laptop being a bit sluggish...
Quote:
Power Cycle reloads the ROM from the disk - you can map this to any key/button combination. I'm not sure why you need to reset after opening from the recent files, though? Does it not work properly otherwise?
Indeed, I have to do a reset for the ROM to actually start. So to reload the ROM from disk, I need to do CTRL+T and CTRL+R. It'd be nice if this was just one button press (that could be mapped to my joypad via the shortcut keys)
Bananmos wrote:
Would it be possible to offer a "slow-mode" rewind & "fast"-forward
There is a setting that configures the speed for the "fast forward" and "rewind" modes in the emulation settings' general tab; you can set them to any % value.
Quote:
I also find that rewinding has a bit of a delay before it starts
This is a compromise between speed, memory usage and code complexity. Mesen takes savestates every 30 frames for the rewind feature and compresses them to save memory - this takes about 1mb/minute of gameplay. This means the emulator has to re-execute ~30 frames or so before the rewind can start displaying anything. It would be possible to remove this delay by, for example, keeping in memory the last second's worth of audio/video and replaying that immediately while the core rewinds older data, but the rewinder's code is already fairly complex as it is, and I would rather not add any more complexity to it if at all possible. I'll try to take a look at the code and see if it is simple to implement, but I can't promise anything.
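The compromise described above can be sketched as a snapshot-every-N-frames scheme; the startup delay falls out of the math directly. This is an illustrative model with made-up names (`Snapshot`, `RewindBuffer`), not Mesen's actual rewinder code:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <vector>

// Illustrative rewind buffer: one snapshot is kept every kInterval frames,
// so rewinding to an arbitrary frame must first re-execute up to
// kInterval - 1 frames from the nearest earlier snapshot before anything
// can be displayed - hence the delay before the rewind starts.
struct Snapshot {
    uint64_t frame;              // frame number the snapshot was taken on
    std::vector<uint8_t> state;  // would be compressed in a real emulator
};

class RewindBuffer {
public:
    static constexpr uint64_t kInterval = 30;

    void OnFrame(uint64_t frame, const std::vector<uint8_t>& state) {
        if (frame % kInterval == 0) {
            _snapshots.push_back({frame, state});
        }
    }

    // Number of frames that must be re-executed to reach `target`.
    static uint64_t FramesToReplay(uint64_t target) {
        return target - (target / kInterval) * kInterval;
    }

private:
    std::deque<Snapshot> _snapshots;
};
```

Keeping the last second or so of audio/video around, as suggested above, would hide this replay cost at the price of extra memory and complexity.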
Quote:
Indeed, I have to do reset for the ROM to actually start.
Is this because of the ROM itself, or are you getting this behavior for any ROM? If you're getting this with all roms, this is not normal, and not something that should ever happen. If that's the case, are you getting this even when all debugger tools are closed, etc? I've never seen this particular issue on my end, so if you are able to figure out what conditions are required to cause the issue (e.g maybe some specific options you've enabled, etc.), that would be really helpful.
Sour wrote:
A few issues I had:
-The color selection palette seems to be set to a transparent color by default? (I just get a blank gray form in the popup, but I can set colors by right-clicking on them)
-On the 3rd tab, in the textbox that gives the palette values (8 hex characters), pressing delete/backspace in this textbox causes the application to crash if there is no "project" loaded.
-libwinpthread-1.dll was missing from your zip (downloaded a version of it and it worked fine though)
Should be fixed now. I uploaded a new version to Google drive, the link is the same.
Sour wrote:
-In the first tab when creating objects, I can't seem to configure conditions no matter what I try (both options in the right-click relating to conditions don't do anything for me)
I added a file into the archive explaining how to use the editor. Please take a look.
Sour wrote:
It also crashes every time I try to start a project on a version of kya's pack that I have (it worked fine with axlrocks' megaman pack though) - but that might be because it is using some of the newer features (or it might have some leftover invalid tags in it, etc)
Yes, the newer features are not handled yet.
Quote:
There is a setting that configures the speed for the "fast forward" and "rewind" modes in the emulation settings' general tab; you can set them to any % value.
Ah, thanks! Was looking in "Preferences", didn't realise there was an option under "Emulation".
Quote:
This is a compromise between speed, memory usage and code complexity. [...] I'll try to take a look at the code and see if it is simple to implement, but I can't promise anything.
Yeah, no worries if it doesn't get done. I'm finding the feature really useful already, just wanted to point out a possible improvement.
Quote:
Is this because of the ROM itself, or are you getting this behavior for any ROM? If you're getting this with all roms, this is not normal, and not something that should ever happen. If that's the case, are you getting this even when all debugger tools are closed, etc? I've never seen this particular issue on my end, so if you are able to figure out what conditions are required to cause the issue (e.g maybe some specific options you've enabled, etc.), that would be really helpful.
It indeed seems to only happen with my own game - other ROMs reset when power cycling.
But it also happens for the cut-down repro I sent you in a PM. How does that one behave on your end?
Bananmos wrote:
But it also happens for the cut-down repro I sent you in a PM. How does that one behave on your end?
It happens on my end, too (I had just assumed that rom was meant to show a green screen - never saw the actual graphics!).
As far as I can tell, your code does this:
-SEI
-Wait for vblank
-Write $80 to $4017, to set it to 5-step mode
-CLI
In Mesen's case this triggers an IRQ - the frame counter was running in 4-step mode long enough to set the IRQ flag, which was being inhibited by the interrupt disabled flag. So an IRQ ends up being triggered right after CLI is executed. Have you tested this on real hardware? If it works on a NES, the wiki's information about $4017 might be incomplete.
Currently it reads:
Quote:
Bit 6 -I-- ---- Interrupt inhibit flag. If set, the frame interrupt flag is cleared, otherwise it is unaffected.
[...]
The frame interrupt flag is connected to the CPU's IRQ line. It is set at a particular point in the 4-step sequence (see below) provided the interrupt inhibit flag in $4017 is clear, and can be cleared either by reading $4015 (which also returns its old status) or by setting the interrupt inhibit flag.
If your code works on a console, it would imply that setting the APU Frame Counter to 5-step mode is enough to clear the IRQ flag, in which case Mesen is wrong - having a test rom to validate this specific thing would be pretty nice.
mkwong98 wrote:
I uploaded a new version to Google drive, the link is the same.
Thanks, I haven't had the chance to take a look at it yet, though - I'll get back to you once I do.
Blargg's APU document seems to cover this scenario, if I'm understanding correctly. Search for "Frame Sequencer" (and don't miss the very last paragraph of the section). This doc says that 5-step (bit 7 of $4017 set to 1) never sets the interrupt flag, so I would assume the preceding sei would keep it inhibited, and the subsequent cli would not trigger an IRQ.
There's also Brad Taylor's document (search for "Low frequency timer control"), which... well... is harder for me to understand (more hardware-oriented in some ways).
Someone more familiar with this (i.e. not me) should probably do some testing on actual hardware, though blargg AFAIK did exactly that.
Visual2A03 breaks down the frame sequencer using six states:
A - at binary 001000001100001 → 3728 clocks
B - at binary 011011000000011 → 7456 clocks
C - at binary 010110011010011 → 11185 clocks
D - at binary 000101000011111 → 14914 clocks - this one asserts IRQ and restarts the sequence if allowed by frm_/seqmode
E - at binary 111000110000101 → 18640 clocks
F - at 0
Unfortunately, these are implemented using an LFSR, so converting between these numbers and ordinary sequential ones is a pain. (I tentatively think it's the same 15-bit LFSR used for noise: x^14 + x^13 + 1.) So F is the recovery/power-on state, guaranteeing that a 1 is shifted in when the state is all 0. (edit: assumed the above LFSR, did the work, and added the offsets, which of course agree with the same work Quietust did seven years ago)
Frm_quarter is ultimately asserted when any of the six above states is found.
Frm_half is asserted when states B, E, or (D AND frm_/seqmode) are found.
E or (D AND frm_/seqmode) restarts the counter from $7FFF—so the very first power-on sequence probably differs from all subsequent operation?
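Assuming the frame sequencer shares the noise channel's shift/tap arrangement (only a tentative guess above, and `LfsrStep` here is a made-up name), the divider can be modeled as below; the taps give a maximal period of 32767 steps, and the all-zero state is a trap unless a recovery node like F forces a 1 back in:

```cpp
#include <cassert>
#include <cstdint>

// One step of a 15-bit Fibonacci LFSR with the NES noise channel's taps:
// feedback = bit0 XOR bit1, shifted in at bit 14. Whether the frame
// sequencer uses this exact arrangement is an assumption here.
uint16_t LfsrStep(uint16_t s) {
    uint16_t feedback = (s ^ (s >> 1)) & 1;
    return static_cast<uint16_t>((s >> 1) | (feedback << 14));
}

// Steps until the register returns to its starting state.
uint32_t LfsrPeriod(uint16_t start) {
    uint16_t s = LfsrStep(start);
    uint32_t n = 1;
    while (s != start) {
        s = LfsrStep(s);
        ++n;
    }
    return n;
}
```

Converting a raw state like 001000001100001 into a clock count then means stepping from $7FFF and counting until that state appears, which is presumably how the offsets above were computed.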
Well, let's not jump to any conclusions. That init code of mine is very old and I can't even remember why it writes $80 rather than $C0.
I've also been testing this on a PowerPak/Everdrive, as I don't have a true MMC3 devcart setup handy to test with ATM (that would require a parallel port and digging up my old DOS-only EPROM emulator program). And the PowerPak/Everdrive probably simulate reset behavior rather than any true NES power-up state.
Anyway, changing it to write $C0 rather than $80 fixes my problem. So I'd assume the bug is my blunder and leave it at that. Thanks for pointing it out
Well, I guess licensed games don't ever do this (otherwise it would likely cause problems in Mesen), but it would be nice to know the exact behavior of the APU in this case (since it's clearly not being tested by any of the existing test roms). This should be pretty simple to test - I might try writing a test rom for this eventually (and make someone else check the result for me... :p)
As far as I can tell from Visual2A03, /RESET should have the same effect as a write to $4017, namely resetting the frame sequencer LFSR to the $7FFF state.
I'm not clear why they also included the frm_f node that recovers from the all-zeroes state; it's unnecessary given the above.
Maybe the original letterless revision of the CPU doesn't do anything on reset? (checks) ... yup! The letterless 2A03 doesn't reset the LFSR to the $7FFF state on reset:
The problem doesn't so much seem to be about the behavior when resetting (one of blargg's tests validates this, iirc), but rather whether the IRQ signal remains enabled when switching from 4-step mode to 5-step mode (if it was already active), or if switching to 5-step mode acknowledges the IRQ signal and prevents an IRQ from being fired. The wiki implies $4015 must be read, or $4017 written with bit 6 set, to acknowledge an IRQ - but does writing $80 when the last written value was $00 (same as reset) also acknowledge it?
If switching to 5-step activates frm_/seqmode, then the question is: can the IRQ signal remain active while frm_/seqmode is set, if it was previously activated? (I've unfortunately forgotten 99% of the little I ever knew about understanding transistors, so I have no idea if this is something that is easily checked or not.) It should be pretty simple to test this in the Visual 2A03/Visual NES, though - might give that a try tomorrow.
Looking through Visual2A03, I see only the following can clear frame_irq:
- /reset
- frm_intmode, which is the latched copy of D6 on writes to $4017
- node 13170 = NOR(clk1out, r4015)
Sour wrote:
If switching to 5-step activates frm_/seqmode, then the question is: can the IRQ signal remain active while frm_/seqmode is set, if it was previously activated?
so ... I'd say, correct, writes to $4017 with the bit 6 clear don't acknowledge the IRQ. frame_irq is acknowledged/disabled continuously while it's high.
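That reading boils down to a few rules, sketched here as a toy model (names are made up, this is not emulator code): only /reset, a $4015 read, or a $4017 write with bit 6 set clears a pending frame IRQ, while entering 5-step mode merely stops new ones from being raised:

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the frame-counter IRQ flag per the Visual2A03 reading above.
struct FrameIrqModel {
    bool irqFlag = false;   // pending frame IRQ
    bool inhibit = false;   // $4017 bit 6 (frm_intmode)
    bool fiveStep = false;  // $4017 bit 7 (frm_/seqmode)

    void Write4017(uint8_t value) {
        fiveStep = (value & 0x80) != 0;
        inhibit = (value & 0x40) != 0;
        if (inhibit) {
            irqFlag = false;  // bit 6 set acknowledges a pending IRQ...
        }
        // ...but bit 7 alone does not (the disputed behavior).
    }

    uint8_t Read4015() {
        uint8_t status = irqFlag ? 0x40 : 0x00;
        irqFlag = false;  // reading $4015 also acknowledges it
        return status;
    }

    void EndOfFourStepSequence() {
        if (!fiveStep && !inhibit) {
            irqFlag = true;  // only 4-step mode ever raises the flag
        }
    }
};
```

Under this model, the SEI / write $80 / CLI sequence discussed earlier leaves the flag set and the IRQ fires right after CLI; writing $C0 instead clears it.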
Quick test on Visual NES with this code starting at $0000:
Code:
SEI
CLD
LDX #$FF
TXS
INX
STX $2000
STX $2001
loop:
LDA $2002
BPL loop
loop2:
LDA $2002
BPL loop2
LDA #$80
STA $4017
CLI
loop3:
jmp loop3
In hex:
Code:
78 D8 A2 FF 9A E8 8E 00 20 8E 01 20 AD 02 20 10
FB AD 02 20 10 FB A9 80 8D 17 40 58 4C 1C 00
And surprisingly, it does appear to clear the IRQ flag:
Code:
cycle: 0A7887 hpos: 1F vpos: F1 ab: 080A db: 0A cpu_a: 80 cpu_x: 00 cpu_y: 00 cpu_db: BF cpu_ab: 4017 io_rw: 00 io_ce: 01 cpu_frame_irq: 01
cycle: 0A7888 hpos: 1F vpos: F1 ab: 080A db: 0A cpu_a: 80 cpu_x: 00 cpu_y: 00 cpu_db: 80 cpu_ab: 4017 io_rw: 00 io_ce: 01 cpu_frame_irq: 00
As soon as it gets to scanline 241 (F1) and the write to $4017 occurs, cpu_frame_irq goes back to 0 (unless I'm looking at the wrong signal - the cpu_ prefix is Visual NES' way of fixing name conflicts between both cores). It looks like the effect is immediate, too, rather than being delayed by a few cycles like the sequence reset is.
Sour wrote:
As soon as it gets to scanline 241 (F1) and the write to $4017 occurs, cpu_frame_irq goes back to 0 (unless I'm looking at the wrong signal - the cpu_ prefix is Visual NES' way of fixing name conflicts between both cores). It looks like the effect is immediate, too, rather than being delayed by a few cycles like the sequence reset is.
While I can reproduce that behavior in Visual2A03 (SEI / LDA #$80 / - manually set /frame_irq low then high - / STA $4017), I can't see which node is pulling frame_irq low. On the cycle of the write, the S/R latch flips for no obvious reason.
There are eight transistors that directly affect or are affected by frame_irq and /frame_irq -
t12655 and t12662 - frm_d AND frm_/seqmode can pull /frame_irq low
t12691 - the S/R latch, frame_irq pulls /frame_irq low
t12695 - /res pulls frame_irq low
t12739 - the S/R latch, /frame_irq pulls frame_irq low
t12783 - frm_intmode pulls frame_irq low
t12785 - ultimately allows reading the IRQ status from $4015
t12794 - reads from $4015, during φ1, pulls frame_irq low
t12838 - frame_irq pulls something low that pulls irq_internal high
So I literally don't understand how what the simulator says is happening happens.
I made a small NROM test - I ran it on Visual NES & it passes (screen output is half broken as usual, but the nametable was updated with the "pass" string). If someone could test this on a NROM board of sorts, that'd be great - otherwise a powerpak/everdrive would be ok, too (assuming I didn't screw something up in the test). Might be interesting to run on PAL & NTSC - although I'd imagine the results will be the same.
It's meant to output "1 FAIL" or "0 PASS". It writes pass if no IRQ occurs after writing $80 to $4017 when an IRQ was pending.
Mesen/Nestopia fail this at the moment, FCEUX/Bizhawk pass - so the behavior seems to vary between emulators.
The (terrible) asm6 source code is also included (thanks to tokumaru for the template, and rainwarrior for the CHR data!)
I've (lightly) modified your code to run on my mapper 218 cart and (heavily) modified to build under my ca65 toolchain. (That was silly, I really should have just ported the "prepare m218" function I've already written for xa65 and ca65 to asm6 for future inclusion instead)
Testing: I reliably see "1 FAIL" on my hardware. (2A03G)
Some code in your version seems to be writing #$E0 to $01:
LDA #$E0
STA $01
$01 is used by the irq handler to check if an IRQ did run or not, so that might be why it's always failing?
When I relinked and used variable names it ended up moving the counter to address 2 instead. (Word at 0 contains a pointer for uploading the mapper 218 CHR)
My rewrite does reliably "fail" in Nestopia, and "pass" in FCEU/FCEUX.
(I also specify mapper 7 w/ 1KB CHRRAM instead of mapper 218)
Ah, my bad. So either it varies from one model to another, or the Visual 2A03 isn't behaving properly in this case (which would be a pretty big coincidence, but you never know.)
Tangentially related, I found a silly bug in Mesen's mapper 218 implementation. If I specify "PPU A13 is connected to CIRAM A10", that's byte $A9 in the header. Mesen's loader fixes that to become byte $A8, which is then parsed as "PPU A12 is connected to CIRAM A10".
Honestly, using the 4-screen bit in the header to mean various 1-screen layouts is stupid and I think submappers should be used instead (and that goes for UNROM512 as well!), but ...
—
Sour wrote:
So either it varies from one model to another,
Er, do you have a hardware report that disagrees with mine?
Quote:
or the Visual 2A03 isn't behaving properly in this case
Visual2A03 is based on a decapped 2A03G, not one of the other revisions. So theoretically it should behave the same as the IC in my NES. That it doesn't—especially given that I don't see why frame_irq is pulled low on the node that it does—strongly implies there's something weird going on with the simulator.
lidnariq wrote:
Tangentially related, I found a silly bug in Mesen's mapper 218 implementation.
Ah, yea, that's because Mesen implements the suggestion to have 4-screen + vertical = Screen A only mirroring in NES 2.0 files, which conflicts with the logic I used for 218. It's a simple fix, though. (But I agree this is pretty hacky at this point.)
lidnariq wrote:
Er, do you have a hardware report that disagrees with mine?
Nope, just (wrongly) assumed the Visual 2A03 wasn't the same model :) At this point, I'd be tempted to agree it's probably just an issue with the simulation (unless someone else gets a different result on their end)
Sometimes I get a glitch with the "Copy Nametable" function. Maybe the game is updating the nametable? Any way to avoid this?
mkwong98 wrote:
Sometimes I get a glitch with the "Copy Nametable" function. Maybe the game is updating the nametable? Any way to avoid this?
Thanks - this should be fixed (for all 3 viewers) in this build:
download
Support for color emphasis/grayscale bits in HD packs was added, too. Nothing has changed in the definition files; the HD frame just gets altered to match the emphasis/grayscale bits (this is only precise at the scanline level, though, based on the first pixel in the scanline)
It also adds 2 new conditions: ppuMemoryCheck/ppuMemoryCheckConstant which work like the memoryCheck/memoryCheckConstant variations, but use PPU addresses instead ($0000-$3FFF, where anything > $3F00 is the palette)
The build also contains 2-3 fixes for minor debugger UI issues.
I have two 8-bit Xmas ROMs (2012 and 2014), not sure if they're publicly released so I can't share them, but they seem to indicate a problem with the UNROM 512 mapper:
I'm noticing CHR bank switching and other glitches with these ROMs that don't occur in FCEUX. I suspect it's the register only covering $C000-FFFF in the variants of the board that have LED lights attached, since they use an additional register at $8000-BFFF (conveniently also required for self flashing, so generally enabled by the battery save flag).
Edit: doesn't actually seem to be a conflict in the $8000-BFFF range, I rescind this suggestion. There is a CHR banking problem when played in Mesen but I'm not sure what it is yet. Forcing XMAS 2012 to 8k CHR-RAM hides the problem, but XMAS 2014 relies on 32k CHR banking.
Edit: the bug is actually really weird, what is happening is that one of the CHR pages is being corrupted.
What appears to happen is that when CHR bank 00 is selected and the nametable at PPU $2XXX is written to, CHR-RAM at $6XXX is also written to at the same time, corrupting it. This actually triggers a breakpoint for CHR writes at that address range, too! I have no idea why this is specifically happening for the $6000 page, but that's what seems to be the problem. Somehow writing to $2XXX (at least the first 1K anyway, the game only uses the first nametable) when CHR bank 0 is selected is also writing to CHR $6XXX.
Edit: Looked up your implementation and the answer is apparent: you are explicitly mapping it there for the 4-screen mirroring case, except you're not really testing for 4-screen mirroring... you're testing for !_oneScreenMirroring, which you've used to conflate 4-screen and 1-screen?
My ROMs are set as horizontal mirroring, incidentally (not sure if that's how the carts are wired, but they only ever write to the first nametable anyway), but I'm pretty sure this mapping should only occur when 4-screen mirroring is requested, not 1-screen, and not hardwired horizontal or vertical.
The implementation for UNROM-512 is almost completely untested - the only ROM I ever found for it was a demo for a game (I forget which). Everything else appears not to be readily available on the internet (which is, well, good for the creators and bad for me :p). And the single "test rom" I found posted on the FCEUX bug tracker didn't appear to match the description on the wiki, either.
What does the log window say about the ROM? I'm mostly interested in whether it's listed as a NES 2.0 rom, and what mirroring it gives the ROM. For your theory to work, the ROM would either be marked as 4-screen (iNES), or as 4-screen + horizontal (if it's NES 2.0 - which was a proposal on how to mark a rom as 1-screen mirroring in NES 2.0). It looks like the log window would display "Four screens" in both scenarios, though.
Well here's your log window:
Code:
------------------------------------------------------
Loading rom: xmas_2014_unrom512.nes
File CRC32: 0x5D8AFC72
------------------------------------------------------
[DB] Initialized - 3100 games in DB
PRG+CHR CRC32: 0x5C1AA732
[iNes] NES 2.0 file: Yes
[iNes] Mapper: 30 Sub:0
[iNes] PRG ROM: 128 KB
[iNes] CHR ROM: 0 KB
[iNes] CHR RAM: 32 KB
[iNes] Work RAM: 0 KB
[iNes] Save RAM: 0 KB
[iNes] Mirroring: Horizontal
[iNes] Battery: Yes
[DB] Game not found in database
Sour wrote:
For your theory to work, the ROM would either marked be as 4 screens (iNES), or marked as 4-screens + horizontal (if it's NES 2.0 - which was a proposal on how to mark a rom as 1-screen mirroring in NES 2.0). It looks like the log window would display "Four screens" in both scenarios, though.
The 4-screen bit is clear, and the CHR RAM is set to 32k. With both of these conditions it triggers that "4-screen" mapping of PPU $2000 to CHR $6000 without an actual 4-screen setting.
Your code's logic isn't checking for whether it's 4-screen or not to apply that mapping:
- Is it NES 2.0 ? Yes
- Is it one screen? No. (This test should be "is it 4 screen")
- Is there >= 32k of CHR RAM? Yes.
I think the problem in your intended implementation is in InitMapper above. If iNES 1 is used, it should interpret the "4-screen" flag as enabling the 1-screen register instead of hard-wired H/V mirroring. If iNES 2 is used and the 4-screen flag is set, it should select between 1-screen and 4-screen depending on H/V, right?
Except your code asks if(IsNes20()) and not if(!IsNes20()). At least, I think it probably works okay with that flipped... but I think it would be better to have the code more directly / semantically ask "is it 4-screen?" rather than "is it not 1-screen?"
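Spelled out, that decision tree might look like the sketch below. The names are hypothetical, and note that the thread's two descriptions of which H/V value selects which NES 2.0 mode don't quite agree, so the bit assignment should be verified against the wiki before relying on it:

```cpp
#include <cassert>

enum class NtMode { Horizontal, Vertical, SingleScreenReg, FourScreenChrRam };

// Hypothetical decision tree for UNROM-512 nametable wiring.
// iNES 1: the 4-screen flag is repurposed to mean "1-screen register".
// NES 2.0: 4-screen + one H/V value = 1-screen, the other = true 4-screen
// in CHR-RAM (which value means which is disputed above - verify it).
NtMode SelectNtMode(bool fourScreen, bool vertical, bool nes20) {
    if (!fourScreen) {
        // The buggy case: plain horizontal/vertical must stay hardwired and
        // must never map PPU $2000 writes into CHR-RAM at $6000.
        return vertical ? NtMode::Vertical : NtMode::Horizontal;
    }
    if (!nes20) {
        return NtMode::SingleScreenReg;
    }
    return vertical ? NtMode::FourScreenChrRam : NtMode::SingleScreenReg;
}
```

With logic like this, a plain horizontal-mirroring header (like the Xmas ROMs) can never reach the CHR $6000 nametable mapping, regardless of CHR-RAM size.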
Sour wrote:
mkwong98 wrote:
Sometimes I get a glitch with the "Copy Nametable" function. Maybe the game is updating the nametable? Any way to avoid this?
Thanks - this should be fixed (for all 3 viewers) in this build:
download
This version crashed on me repeatedly, until I destroyed my ~/.config/mesen directory. At that point roms ran again, but I have the issue (someone mentioned it before) where the debugger window is blank, and clicking anywhere in it gives the following error dialog.
That's most likely because the builds I post on the forums are usually "Windows-only" builds that don't contain the Linux code (my bad for not mentioning it). So you're most likely running the new UI code with the old C++ core library, which is probably causing the crashes.
Try this build:
download
It's the same as the one I posted in the other thread a couple of days ago, but contains the Linux build as well.
Awesome, I'll try it later tonight. Thanks!
I'm really loving this emulator lately! It's the only one that will run my game and reload the saves. :D
However, as I'm fiddling with my game, the bytes in the ROM keep changing... and I can't reload save states anymore. Even just changing a 1 to a 0 keeps me from reloading. Bug-testing is rather difficult when I can only use SRAM saves.
Is there any way to disable this rom check when reloading a save state? Or is it possible to add an option to disable it?
If PC and stack have addresses on them, and those addresses have changed because you added or deleted something earlier in the ROM, how useful are your saved states?
tepples wrote:
If PC and stack have addresses on them, and those addresses have changed because you added or deleted something earlier in the ROM, how useful are your saved states?
That's why he mentioned just flipping a single bit. Often when you're editing levels, for example, the code and structure of the game stays exactly the same, but a tile or two changes. It's really helpful to have the ability to reload right to where you left off in testing after tweaking a byte of level data.
tepples wrote:
If PC and stack have addresses on them, and those addresses have changed because you added or deleted something earlier in the ROM, how useful are your saved states?
Actually I find this an extremely useful feature (or lack of feature, in this case) of FCEUX.
Very often I was making code or data changes that didn't move those critical things around, and could use the same savestate to quickly get back to the thing I was trying to test after making a change.
If savestates are normally taken just before NMI, the only code that needs to have a stable location is the main thread's wait loop, I suppose, but whatever the particular condition was, my program was meeting it, and because of that it was VERY helpful for testing.
Another thing related to savestates: I notice that it normally takes a savestate automatically when I quit a game. However, if I double click on an NES ROM to open it, it will not reload that state? It seems the only way to get back to it is to open Mesen on its own, and then navigate its recent games UI? It seems if I double click to open a file (the normal way I open a ROM), I have already obliterated the previously saved state and have no way to get it back. Is there an option for this that I missed?
Data in ROM, such as the current level or metatile decompression pointers, can also move around from build to build even if you aren't modifying code. This renders pointers in RAM invalid.
One possible workaround is to construct the saved state within the program under test by specializing its level loading code. It's similar to how I'd add durable breakpoints to a program by sta $4444.
Jiggers wrote:
Or is it possible to add an option to disable it?
I can add an option for it - up until 0.9.3 or so there was no CRC check when saving/loading states, but I added a CRC to states when I added the option to manually load/save a state to any file. This allows Mesen to load the game that matches the CRC when you pick a state for another game.
3-4 people have asked for this in recent weeks - it's not hard to just disable the check (by adding an option), I'll try to get this done soon.
rainwarrior wrote:
It seems the only way to get back to it is to open Mesen on its own, and then navigate its recent games UI?
That's how it works at the moment, yes - it's hard to broaden the scope without potentially causing some use cases to become less intuitive (e.g someone might argue that they expect the game to start from the beginning when they open it via file associations). I could add an option to make it so Mesen always tries to load any existing savestate that was taken when the game was unloaded? (no matter how you try to load the game)
Or a command line option --load-newest-save-state that an IDE could be configured to invoke perhaps?
Sour wrote:
rainwarrior wrote:
It seems the only way to get back to it is to open Mesen on its own, and then navigate its recent games UI?
That's how it works at the moment, yes - it's hard to broaden the scope without potentially causing some use cases to become less intuitive (e.g someone might argue that they expect the game to start from the beginning when they open it via file associations). I could add an option to make it so Mesen always tries to load any existing savestate that was taken when the game was unloaded? (no matter how you try to load the game)
I was thinking it might be nice if there was a "last session" entry in the "Load State" menu, so whatever method was used to start the game, maybe that previous savestate would always be easy to get back to, even if you have already started playing.
I am a little bit confused, because there seem to be slots 1-7 for saving, but there's a slot 8 for loading? Some of my games seem to have stuff in that slot, but I haven't figured out how it gets filled. At first I thought it might be what I was just asking for above, access to the thing that was saved when I quit, but I can't figure out where this savestate actually comes from.
When playing back an input movie to record an AVI, if I have the play icon enabled, it appears in the AVI (despite none of the other UI overlay stuff appearing, e.g. pause, FPS, frame counter). Input display also does this, apparently. Is there a rule for which stuff appears in the recording and which doesn't? Text overlay is omitted, but not graphical icons?
I notice it supports the DosBox ZMBV codec. ZMBV is good enough for my needs, though I might also recommend this one:
https://en.wikipedia.org/wiki/Lagarith
Also, some emulators will pause emulation whenever you use the menu (FCEUX, Nestopia, RetroArch), and others don't (Nintendulator, puNES). I can see why someone would want either behaviour, but for me I'm used to this being a pause action. I don't know if it's possible to automatically pause and unpause whenever the menu is open, but that would be a nice option for me, because I find I expect/want the pause a lot for this case, and it ends up being an extra step to pause first (or an extra several steps to back out of the menu, pause, then try again).
Advancing frame by frame seems to have the game completely muted. It would be useful if it would play a frame worth of audio, though. The immediate sound feedback in this case can be a very good indicator of state (especially if trying to debug music/sfx stuff), and is something I used quite a bit in FCEUX.
A "last session" option shouldn't be too hard to add. Slot 8 is the auto-save slot (saves every 5 mins by default) - I should probably make this clearer in the UI (and should probably expand the number of slots to ~10 now that UI shortcuts can be customized)
What gets recorded and what doesn't is a bit of a mess, I will admit - the text & counters are added on top, outside the actual emulation core, and aren't recorded. The icons and controller input are recorded and appear in screenshots (mostly as a side-effect of how the rendering "pipeline" works). Ideally I'd prefer that none of them were recorded at all, although arguably some people might want to record all of them.
The main reason Lagarith isn't included is its size - it's 400+kb worth of code, compared to ZMBV & Camstudio which, combined, fit within 500 lines or so. I try not to add large libraries unless there is really a large benefit to including them (e.g 7zip, lua, etc.).
The pause-while-in-menu is something that was requested a few months ago on Github but I haven't gotten to it yet. It used to be that way (when "Pause when in background" was enabled), but it didn't work as reliably as I wanted (and I think the Mono port didn't like it at all), so I scrapped the behavior at some point - I can take a look at getting it working properly again.
For sound while frame advancing, I'll have to take a look - at the moment, pausing empties the sound buffer, and since a single frame usually isn't enough to refill it past the threshold for playback to start (defaults to 50ms), no sound is played. It shouldn't be overly hard to fix it, though, I think.
Sour wrote:
The main reason Lagarith isn't included is its size - it's 400+kb worth of code, compared to ZMBV & Camstudio which, combined, fit within 500 lines or so. I try not to add large libraries unless there is really a large benefit to including them (e.g 7zip, lua, etc.).
Oh, ZMBV is fine. I was just suggesting Lagarith because it had been my default for a long while, but testing them both now, it seems that ZMBV is smaller and Lagarith is faster. I think I care about the file size more, so I should probably switch.
I guess the ideal thing would be to get the list of installed codecs from the user, but that's probably not easy/possible to do cross-platform.
After reading this:
http://forums.nesdev.com/viewtopic.php?f=3&t=9873
I wonder if it is possible to add information to the HD pack to tell the emulator which tiles are used as a sprite mask, and only enable the sprite limit on those scanlines.
mkwong98 wrote:
After reading this:
http://forums.nesdev.com/viewtopic.php?f=3&t=9873
I wonder if it is possible to add information to the HD pack to tell the emulator which tiles are used as a sprite mask, and only enable the sprite limit on those scanlines.
Mesen already has a global option for this: Emulation -> Advanced -> "Automatically re-enable sprite limit as needed to prevent graphical glitches when possible".
It will re-enable the sprite limit on a per-scanline basis based on certain heuristics - it works with the majority of games that abuse the sprite limit to hide sprites.
Here's another build with a few tweaks:
download
Changes:
-Added an option to ignore the hash checks when loading a save state (Preferences->Save Data). This will only work with states that were saved with this build (because I kept a restriction that the save state must be for a game with the same mapper ID, which should be fine for homebrew, and prevent potential crashes)
-Increased the number of save state slots to 10, and clearly identified the auto save slot in the UI
-Added a "Load Last Session" option in the File menu (and a shortcut binding for it) that reloads the state from the last time the game was closed in the emulator (including data that is no longer shown in the game selection screen). This can be used any number of times and it will always reload the same state, until the game changes, or "Power Off" is used
-Added a couple more auto-pause options, one to pause when in menus & configuration dialogs, another to pause when in debugging tools. Having the main debugger window opened will override these, though (I might be able to fix that relatively easily, now that I think about it, though)
rainwarrior wrote:
I guess the ideal thing would be to get the list of installed codecs from the user, but that's probably not easy/possible to do cross-platform.
Yea, Linux support is essentially why I didn't use the Win32 AVI API for this, otherwise it would have been a lot simpler. To be fair, though, re-encoding to a better codec using ffmpeg is usually very simple (and with today's CPUs, very fast, too)
Sour wrote:
Here's another build with a few tweaks:
download
Changes:
-Added an option to ignore the hash checks when loading a save state (Preferences->Save Data). This will only work with states that were saved with this build (because I kept a restriction that the save state must be for a game with the same mapper ID, which should be fine for homebrew, and prevent potential crashes)
Hooray! Thank you!
I wonder now... So my game project is actually a Final Fantasy 1 hack, upgraded to use the MMC5 mapper. Why does this emulator save/load SRAM saves, and FCEUX doesn't?
http://www.romhacking.net/forum/index.p ... #msg343988 - this is the post with the main mapper changes that would have affected how the saving works. There's a link to the project files in my second to last post in the thread. I'm not asking for help or anything, just supplying information about the changes made, if it helps answer the question of why Mesen works so much better...!
Sour wrote:
Mesen already has a global option for this: Emulation -> Advanced -> "Automatically re-enable sprite limit as needed to prevent graphical glitches when possible".
It will re-enable the sprite limit on a per-scanline basis based on certain heuristics - it works with the majority of games that abuse the sprite limit to hide sprites.
Nice!
Jiggers wrote:
Why does this emulator save/load SRAM saves, and FCEUX doesn't?
Are you bankswitching the save ram or only using 8kb of it? FCEUX might be defaulting to 8kb of sram for MMC5 (haven't checked), which could explain your problem. Mesen emulates 64kb sram for all MMC5 games (since that's what the wiki recommended).
Sour wrote:
Mesen emulates 64kb sram for all MMC5 games (since that's what the wiki recommended).
I think that 64k suggestion is maybe aimed toward a more limited emulation environment (e.g. flash cart?). It's just saying that for basic compatibility you could treat it as 64k always.
No games actually had 64k (though fingers crossed for NES SimCity), the reason for using the 64k superset is just because different banking bits were used on the 16k game boards than 32k boards.
If you want accurate emulation instead of mere compatibility, you probably want to emulate the wiring and mirroring that actual carts had.
Sour wrote:
Jiggers wrote:
Why does this emulator save/load SRAM saves, and FCEUX doesn't?
Are you bankswitching the save ram or only using 8kb of it? FCEUX might be defaulting to 8kb of sram for MMC5 (haven't checked), which could explain your problem. Mesen emulates 64kb sram for all MMC5 games (since that's what the wiki recommended).
I figured it out! Disch suggested writing $04 to register $5113. That was making the save data appear in the 8000 range in the save file Mesen produced. FCEUX's save file ends before getting there. Solved it by writing $00 to the register instead! Now all the save data goes to the top of the file.
I noticed that Mesen is no longer disabling the menu when I load a game now, though. Before, I'd load up, and just start pressing keyboard buttons to do stuff in the game; now when I use the arrow keys, I start scrolling through the menu... while still playing the game. So I have to click on the game section of the window to stop that.
rainwarrior wrote:
If you want accurate emulation instead of mere compatibility, you probably want to emulate the wiring and mirroring that actual carts had.
The NES 2.0 spec doesn't have any way of specifying the wiring, though, I think. I could hardcode the mirroring for 8kb/16kb/32kb/64kb and emulate the proper sizes based on the game DB or NES 2.0 headers, but I'd still have to fall back to emulating 64kb for everything else (or risk breaking rom hacks, etc.)
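As a rough illustration of the "hardcode the mirroring per size" idea - this sketch is an assumption on my part, not Mesen's code, and the plain modulo glosses over the specific banking bits that real 16k/32k boards wired up:

```python
# Illustrative only: mask an MMC5 SRAM bank number down to the
# installed size, so smaller boards mirror within the 64k superset.
def sram_offset(bank, sram_size):
    banks = sram_size // 0x2000  # number of 8k banks installed
    return (bank % banks) * 0x2000
```

With 64k installed, every bank maps to distinct storage; with only 8k installed, every bank value mirrors down to the same 8k.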
Jiggers wrote:
I noticed that Mesen is no longer disabling the menu when I load a game now, though.
Thanks, not quite sure what could have caused this - I'll take a look.
Jiggers wrote:
I noticed that Mesen is no longer disabling the menu when I load a game now, though.
This should be fixed in this build:
download
Just released 0.9.5, which contains a ton of debugger improvements/fixes (thanks for all the feedback/testing!).
It also adds new features to HD Packs and improves their performance in some scenarios.
Beyond that, there's been a few bug fixes and some new options, but nothing too major.
The documentation can also be downloaded for offline use starting with this version (download link)
Just noticed that the new version is out. Thanks for all the cool stuff you added recently!
I need to do some more experimenting before I file a bug on GitHub, but I can't seem to use F9 to set a breakpoint anymore on Linux. I have to use the menu instead.
(But like Tokumaru said, we're thankful for your hard work that provides such a great tool for the community!)
Does the context menu properly list the toggle breakpoint item as having the F9 shortcut? If not, the key bindings might be wrong for some reason (you can customize them in Options->Configure shortcut keys). If it says F9 though and doesn't work, do other shortcuts work? What if you remap it to something other than F9?
Also, you can click on the left-most part of the margin to toggle a breakpoint for that line (probably a bit faster than using the menu)
tokumaru wrote:
Just noticed that the new version is out. Thanks for all the cool stuff you added recently!
You're welcome!
Sour wrote:
Does the context menu properly list the toggle breakpoint item as having the F9 shortcut? If not, the key bindings might be wrong for some reason (you can customize them in Options->Configure shortcut keys). If it says F9 though and doesn't work, do other shortcuts work? What if you remap it to something other than F9?
It listed F9, but F9 didn't work (it did in the previous version, though, so I don't think it's an issue of my window manager stealing the key). I changed it to something else and it worked fine. So it's definitely not a show-stopper for me; I'll leave it up to you whether it's worth investigating further.
Quote:
Also, you can click on the left-most part of the margin to toggle a breakpoint for that line (probably a bit faster than using the menu)
Oh, huh. I guess I had been clicking the wrong spot, because I didn't think that worked. But it does. Yeah, that's faster.
Also, one other thing ... is there an option to disable the little tooltip popups that appear when I mouse over an instruction/opcode? Once one appears, my mouse wheel scrolling stops working until I click back in the code window, which makes it painful to scroll around. I realize that's probably a weird fluke of Mono's UI stuff, but if there's a way to turn off those popups (I couldn't find an option in the menu to do it), I could work around it.
Either way, again, thanks!
gauauu wrote:
Also, one other thing ... is there an option to disable the little tooltip popups that appear when I mouse over an instruction/opcode?
You should be able to disable this in Options->Show->Show OP Code Info Tooltips. I tend to add options to disable any new feature that is potentially annoying, since some people might like the old behavior better :p
I'll try to reproduce the F9 issue in my Ubuntu VM, if I get the same problem it should be fairly straightforward to fix.
Sour wrote:
gauauu wrote:
Also, one other thing ... is there an option to disable the little tooltip popups that appear when I mouse over an instruction/opcode?
You should be able to disable this in Options->Show->Show OP Code Info Tooltips. I tend to add options to disable any new feature that is potentially annoying, since some people might like the old behavior better :p
Ah, that works for removing the opcode popup, but is there another option that prevents the popup for labels?
There's no option for it - but maybe it would make more sense to add an option that requires a certain key to be held down while mousing over to see the popup? e.g hold down Shift + mouse over to see the popup, instead of just mouse over. Otherwise you would have to toggle that setting each time you did want to check the popup.
Also, completely unrelated, but kya just released a HD Pack for Castlevania 1 that uses 2x normal resolution and replaces the sound track with ogg files. It looks and sounds pretty neat - although some people might argue it doesn't feel like a NES game at this point.
Link:
https://www.romhacking.net/forum/index.php?topic=26114
It seems like whenever I make significant changes to my ROM, the cheat window adds a new entry for it with the same name. I haven't tested whether it makes a new entry every single time I make any changes, or only if I open the cheat window after a change, or what...
I'm also not sure if I'm using it correctly, but no matter what exporting or importing I do, I can't get old cheats to show up on a "new" game's entry. I have to manually add them all over again. So my best guess is it's doing the same thing save states were doing: when loading cheats from a file, it makes a new game entry if the old game entry doesn't match? Perhaps the check for that could be merged with the save state check, if that's not too much trouble.
Anyway, it's not that big a deal at the moment, just a weird quirk I noticed!
Hi!
I want to discuss scripting in emulators (in Mesen first of all - I think its Lua system is the best right now).
I know why emulator authors choose Lua: it's minimalistic and lightweight. But in practice Lua is a language for non-programmers - game developers use it for very limited scripting inside games, while the real power of scripting needs a more powerful language like Python (for example, the script systems in IDA or Maya). Python is a heavy language, though, and embedding it costs size, so no emulator supports it right now.
The other problem with Lua is that extension modules are hard to find, and many are written in C, so the script author has to compile them against the right version of the Lua headers - 5.1 vs 5.3 differences, C compiler formats, 32-bit vs 64-bit hell (Mesen and FCEUX, for example, use different Lua versions). This is definitely not what a script author wants to deal with.
Another way to solve the problem is to communicate with Python via Lua. I made a proof-of-concept Python server that talks to FCEUX (which already has the LuaSocket package statically linked, providing low-level socket wrappers for Lua). I had to create a simple RPC system over JSON - it was the only protocol for which I could find a small working implementation in pure Lua. It's really VERY simple and not good for real projects.
I've now implemented almost all the functions from the FCEUX Lua API except a few insignificant ones, so I have an interactive Python server that controls FCEUX over sockets. Callbacks from Lua to Python are implemented, as is pausing/unpausing the emulator from Python - that uses a poorly documented ability of FCEUX to keep updating Lua sockets while the emulator is paused, via the gui.register callback. I'll post a demo video in a day or two.
So I think, if Sour agrees to include even just the LuaSocket module, I can port the Lua-Python bridge to Mesen.
Even better would be if someone could find a working serialization protocol for Lua. Protobuf is a good one, and some implementations work as static compilers from shared .proto files to pure Lua files, so no additional binaries would need to ship with the emulator - but I haven't found any solution that works without fixes.
P.s. sorry for possible mistakes, my English is poor.
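The bridge idea above boils down to line-delimited JSON messages over a socket. A minimal Python-side sketch - the method name and message shape here are illustrative assumptions, not the actual protocol used:

```python
# Sketch of line-delimited JSON-RPC framing for an emulator<->Python
# bridge. One newline-terminated JSON object per message keeps the
# pure-Lua side trivial: read a line, decode it, dispatch on "method".
import json

def make_request(req_id, method, params):
    msg = {"id": req_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode()

def parse_response(line):
    # Returns (id, result, error); absent fields come back as None.
    msg = json.loads(line)
    return msg["id"], msg.get("result"), msg.get("error")
```

This is as simple as described - no framing beyond newlines, no batching, no error recovery - which is why something like protobuf would be nicer for real projects.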
spiiin wrote:
The other problem with Lua is that extension modules are hard to find, and many are written in C, so the script author has to compile them against the right version of the Lua headers - 5.1 vs 5.3 differences, C compiler formats, 32-bit vs 64-bit hell (Mesen and FCEUX, for example, use different Lua versions). This is definitely not what a script author wants to deal with.
You might want to check out
Nintaco. It provides programmatic control using C, C#, Java, Lua and Python.
I wasn't even aware that Lua did not support sockets by default, to be perfectly honest. I can look into adding that.
As for Python, you pretty much guessed right - it's just way too big to embed directly into an emulator. I compared a few potential candidates (Lua, JS, Python) before settling on Lua due to its size/simplicity in terms of integration. Python was something like 30x bigger than Lua, if I recall correctly.
But realistically, except for very complex scripts built for a very specific purpose, I imagine Lua is good enough for the majority of use cases. Still, if you do make a Python bridge, I'm more than happy to add it to the built-in scripts I've started adding to the script window, since it could definitely be useful for other people too.
Jiggers wrote:
It seems like whenever I make significant changes to my rom, that the cheat window adds a new entry for it with the same name.
Cheats are stored based on the game's CRC32 (iirc, it only checks the CRC32 of the PRG ROM, so modifications to CHR ROM won't cause any issues). It's done this way mostly because if the PRG ROM doesn't match, cheats typically won't work (if you ignore homebrew dev or ROM hacking). I could probably add an option to "match by rom name" if no CRC match is found (or, like you said, use the save state option to toggle this behavior)
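The CRC-keyed lookup being described can be sketched like this - a simplified illustration, not Mesen's code (it ignores trainers and NES 2.0 size fields, and the cheat code string is made up):

```python
# Simplified illustration of keying cheats on the PRG ROM's CRC32.
import zlib

def prg_crc32(rom_bytes):
    # iNES layout: 16-byte header; byte 4 = PRG ROM size in 16k units.
    prg_size = rom_bytes[4] * 16384
    prg = rom_bytes[16:16 + prg_size]
    return zlib.crc32(prg) & 0xFFFFFFFF

def find_cheats(rom_bytes, cheat_db):
    # CHR-only edits don't change the key, so graphics hacks keep
    # their cheats; any PRG edit produces a new entry.
    return cheat_db.get(prg_crc32(rom_bytes), [])
```

This also shows why Jiggers keeps getting duplicate entries: a one-byte PRG change yields a brand-new CRC key.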
zeroone wrote:
You might want to check out
Nintaco. It provides programmatic control using C, C#, Java, Lua and Python.
Thanks, I'll check it out and reply in the thread about your emulator. I didn't know about it before.
Sour wrote:
Still, if you do make a Python bridge, I'm more than happy to add it to the built-in scripts I've started adding to the script window, since it could definitely be useful for other people too.
Currently, my text JSON-RPC protocol is only for testing - it's not fast or robust, and not very useful yet. But I'll post the sources anyway, at least as an example for others; it's better than nothing.
Also, if you want some examples to bundle with the emulator, feel free to use and modify my scripts, even without crediting me:
https://github.com/spiiin/CadEditor/blo ... rallax.lua
Shows lines where the game changes its horizontal scrolling.
https://github.com/spiiin/CadEditor/blo ... Screen.lua
Simple "shader" post-effects for any game - maybe I'll add some other shaders later. It's a good starting point for making other shaders, but it needs some fixes to the GUI key-press notifications.
GIF: https://spiiin.dreamwidth.org/file/17583.gif
Video (bad quality): https://www.youtube.com/watch?v=04KOJmRYwko
Sour, please can you add the "frameRange" condition type to the documentation? Also, can you clarify which integer values in the conditions need to be in hex format? And what does the "disableContours" option do? The description appears to be incomplete. Thank you.
Hi Sour again,
It would be so helpful to me if you could add a check box on the Trace log file screen that would cause the registers and frame count to appear on the left when the file is generated. That would make them all aligned when using the indent with stack pointer, and it would be more convenient to comment.
Thank you for reading my request.
edit: if you could just add a line like
<SwitchRegistersLeft>false</SwitchRegistersLeft> under the TraceLoggerOptions in your setting.xml file that would be just as helpful for me because I plan to never switch back if you make the change.
spiiin wrote:
Also, if you want some examples to bundle with the emulator, feel free to use and modify my scripts
Thanks, I've added these to the dropdown. I haven't been able to sit down and take a look at enabling sockets yet, though (did some research a couple of weeks ago but got sidetracked and didn't get back to it yet).
mkwong98 wrote:
can you add the "frameRange" condition type to the documentation? Also can you clarify which integer values in the conditions need to be in hex format? And what does "disableContours" option do? The description appears to be incomplete.
frameRange isn't in there mostly because I couldn't find a suitable way to describe it with words :) It's something kya needed for his HD Pack, and essentially it defines a fraction of frames during which the condition is valid, iirc - so if you put 1 and 3 as the parameters, the condition is true for 1 frame out of every 3 frames.
The fields marked as "integer" are meant to be decimal, and the ones marked as "hex" are supposed to be hex. I messed this up with the memoryCheck* conditions, though, which all say "integer" even though they should all be "hex" instead.
disableContours disables a background-color outline around sprites when a HD background is enabled (otherwise sprites that contain transparent pixels that rely on the background being a specific color did not show up properly).
It's a bit of a pain to update the documentation in-between releases without including stuff that's not in 0.9.5, I'll try to get these fixed when the next version is released.
unregistered wrote:
It would be so helpful to me if you could add a check box on the Trace log file screen that would cause the registers and frame count to appear on the left when the file is generated.
Hey, I read your PM, too - I understand you want to customize the trace log's output, but I'd prefer not to add too many options that just change how the information is displayed (rather than what is displayed) since the trace logger is already filled with options to control what's shown. I might be able to add some sort of "templating" that you can use to customize the output to be more or less whatever you want it to be (e.g by writing the format you want to use in a textbox), but I'm unsure how easy this would be considering the trace logger has to output 450+k lines per second to keep the emulation running at 60+fps.
Sour wrote:
frameRange isn't in there mostly because I couldn't find a suitable way to describe it with words
It's something kya needed for his HD Pack, and essentially it defines a fraction of frames during which the condition is valid, iirc. So if you put 1 and 3 as the parameters, 1 frame out of every 3 frames, the condition is true.
I see something like this in kya's pack:
Code:
<condition>Frame14PlusOf16,frameRange,10,e
<condition>Frame12PlusOf16,frameRange,10,c
<condition>Frame10PlusOf16,frameRange,10,a
<condition>Frame8PlusOf16,frameRange,10,8
<condition>Frame6PlusOf16,frameRange,10,6
<condition>Frame4PlusOf16,frameRange,10,4
<condition>Frame2PlusOf16,frameRange,10,2
It seems "frameRange,x,y" mean:
Code:
result = ((frame % x) >= y);
Is this correct?
Sour wrote:
disableContours disables a background-color outline around sprites when a HD background is enabled (otherwise sprites that contain transparent pixels that rely on the background being a specific color did not show up properly).
Sorry, I still can't get my head around this - can you give an example?
The background tag has two optional values, what are the default values? 0.0 or 1.0?
Thanks.
Sour wrote:
unregistered wrote:
It would be so helpful to me if you could add a check box on the Trace log file screen that would cause the registers and frame count to appear on the left when the file is generated.
Hey, I read your PM, too - I understand you want to customize the trace log's output, but I'd prefer not to add too many options that just change how the information is displayed (rather than what is displayed) since the trace logger is already filled with options to control what's shown. I might be able to add some sort of "templating" that you can use to customize the output to be more or less whatever you want it to be (e.g by writing the format you want to use in a textbox), but I'm unsure how easy this would be considering the trace logger has to output 450+k lines per second to keep the emulation running at 60+fps.
I understand what you are saying. Well, if you do try to add "templating", that would be extremely incredible - just don't worry about your emulator not running at 60fps during trace log creation, because running trace logs at 100% emulation speed isn't helpful, for me at least. Those files become way too big and using them is a huge chore. I usually build trace log files at 1% emulation speed.
edit: Hmm... I've never used a condition when making a trace log file; maybe utilizing condition(s) could warrant recording trace log files at 100% emulation speed. That's a really cool feature!
In Mesen's Trace Logger, after selecting "Text" display for the processor Status Flag Format, it appears to me that your text is incorrect. It could be nv-bdizc, instead of your nvb-dizc, to make more sense, I think.
http://wiki.nesdev.com/w/index.php/CPU_status_flag_behavior (see the lower B flag section)
mkwong98 wrote:
Code:
result = ((frame % x) >= y);
Is this correct?
That's how it's implemented in the code, yes.
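For anyone trying to picture how that condition behaves over time, here's a rough Python sketch of the periodic check (illustrative only, not Mesen's actual code):

```python
def frame_condition(frame, x, y):
    """Periodic HD pack condition: result = ((frame % x) >= y).
    True for the last (x - y) frames of every x-frame cycle."""
    return (frame % x) >= y

# With x=4, y=2 the condition alternates in a 4-frame cycle:
# frames 0 and 1 -> False, frames 2 and 3 -> True
pattern = [frame_condition(f, 4, 2) for f in range(4)]
```

So picking x and y lets a replacement tile blink or animate on a fixed cycle.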
mkwong98 wrote:
Sorry, still can't get my head around this, can you give an example?
The black outline around the sprites in this screenshot is caused by this:
Otherwise all the black on the sprites would end up showing the background instead - you can tell this solution isn't perfect if you look at the top right sprite's cape, the background's blue shows through it. If disableContours is turned on, the black disappears and the background will be shown for those pixels instead.
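A hedged sketch of the per-pixel decision being described (the names and structure are my own illustration, not Mesen's implementation):

```python
def hd_sprite_pixel(sprite_pixel, contour_color, hd_background_pixel, disable_contours):
    """Illustrative only: when a sprite pixel is transparent, either draw the
    baked-in outline color (so sprites that rely on the background being a
    specific color still look right), or let the HD background show through
    when disableContours is enabled."""
    if sprite_pixel is not None:
        return sprite_pixel          # opaque sprite pixel always wins
    if disable_contours:
        return hd_background_pixel   # HD background shows through
    return contour_color             # e.g. the black outline in the screenshot
```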
mkwong98 wrote:
The background tag has two optional values, what are the default values? 0.0 or 1.0?
0.0 so that the background doesn't scroll - "1.0" would mean the background scrolls at the same speed as the PPU's scrolling registers change.
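A minimal sketch of what those scroll-speed values imply, assuming the background offset is simply the PPU scroll value times the tag value (names here are illustrative, not Mesen's API):

```python
def background_offset(ppu_scroll, speed=0.0):
    """Illustrative: HD background scroll offset as a multiple of the PPU
    scroll. 0.0 (the default) keeps the background static; 1.0 scrolls it
    at the same speed as the PPU scroll registers change."""
    return int(ppu_scroll * speed)
```

So a parallax-style backdrop could use an intermediate value like 0.5.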
unregistered wrote:
In Mesen's Trace Logger, after selecting "Text" display for the processor Status Flag Format, it appears, to me, that your text is incorrect. It could be nv-bdizc, instead of your nvb-dizc
It should probably always be "nv--dizc", actually, since the B flag does not exist on the CPU itself. Thanks - I'll fix it along with the other trace logger changes I am slowly working on.
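The "nv--dizc" text form can be sketched like this (a toy formatter assuming the common convention of uppercase for set flags and lowercase for clear ones; not Mesen's actual code):

```python
def p_to_text(p):
    """Render a 6502 P register in the "nv--dizc" style: bits 5 and 4 (the
    unused bit and the B position) are shown as '-' since B is not a real
    flag in the CPU itself; set flags are uppercase, clear flags lowercase."""
    names = "nv??dizc"  # bit 7 down to bit 0 ('?' positions are never used)
    out = []
    for bit in range(7, -1, -1):
        if bit in (5, 4):
            out.append('-')
        else:
            ch = names[7 - bit]
            out.append(ch.upper() if p & (1 << bit) else ch)
    return ''.join(out)
```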
Sour wrote:
unregistered wrote:
In Mesen's Trace Logger, after selecting "Text" display for the processor Status Flag Format, it appears, to me, that your text is incorrect. It could be nv-bdizc, instead of your nvb-dizc
It should probably always be "nv--dizc", actually, since the B flag does not exist on the CPU itself. Thanks - I'll fix it along with the other trace logger changes I am slowly working on.
That's a great idea!
Was unsure about suggesting "nv--dizc" earlier, but now that makes a lot of sense after carefully reading and being blessed with finally understanding this nesdev wiki line:
This is the only time and place where the B flag actually exists: not in the status register itself, but in bit 4 of the copy that is written to the stack.
"... along with the other trace logger changes I am... working on."
Sour, when you have some free time, could you make your blue and black pause logo somewhat transparent?
(It covers some white text from Lua Script and it would be very nice if I could read that text when Mesen is paused.)
Maybe think about adding a "Toggle Pause Logo" line under Shortcut Keys instead. Whatever takes the smallest amount of your time. I'm sorry for all of these requests.
Here's a build with the trace logger improvements:
download
It adds an (optional) formatting string that lets you customize how each row is constructed (field order, padding, hex vs decimal, etc.), a couple more formatting options and fixes some bugs with the CPU flag display (it also contains other debugger changes I've done to the code since 0.9.5, too)
If you setup the trace logger this way, you should get more or less what you were looking for (between your messages here & your PM):
Attachment:
traceloggerconfig.png [ 12.57 KiB | Viewed 2678 times ]
You can customize the output to your needs by changing the "Format" field - there's a tooltip that explains how it works on the right.
I also made the pause icon slightly transparent, I can't really make it any more transparent than this (else it becomes too hard to notice it) - there is already an option to turn off the icon completely (in the debugger's options menu), if you would rather not see it at all.
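For the curious, here's a toy sketch of how such [Tag,option] templating might work (the grammar is modeled loosely on the posts in this thread and is an assumption, not Mesen's exact parser; this version is also lenient about spaces inside tags):

```python
import re

def format_trace(fmt, values, hex_width=2):
    """Toy tag-based trace-log template: [Name] prints a value, [Name,h]
    prints it in hex, [Align,N] pads the line so far out to column N.
    Illustrative only."""
    out = []
    pos = 0
    for m in re.finditer(r'\[\s*(\w+)\s*(?:,\s*(\w+)\s*)?\]', fmt):
        out.append(fmt[pos:m.start()])
        tag, opt = m.group(1), m.group(2)
        if tag == 'Align':
            out = [''.join(out).ljust(int(opt))]   # pad line to column N
        else:
            v = values[tag]
            out.append(format(v, '0%dX' % hex_width) if opt == 'h' else str(v))
        pos = m.end()
    out.append(fmt[pos:])
    return ''.join(out)
```

With a format like `f[FrameCount][Align,8]A:[A,h]` this pads the frame count out to a fixed column before printing the registers, which is the effect discussed below.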
Sour wrote:
Here's a build with the trace logger improvements:
download
It adds an (optional) formatting string that lets you customize how each row is constructed (field order, padding, hex vs decimal, etc.), a couple more formatting options and fixes some bugs with the CPU flag display (it also contains other debugger changes I've done to the code since 0.9.5, too)
If you setup the trace logger this way, you should get more or less what you were looking for (between your messages here & your PM):
Attachment:
traceloggerconfig.png
You can customize the output to your needs by changing the "Format" field - there's a tooltip that explains how it works on the right.
Thank you so much Sour! For some reason Mesen doesn't like this
f[FrameCount][Align, 7]A:[A,h] X:[X,h] Y:[Y,h] S:[SP,h] [P,8] [PC,h] [ByteCode, 12h] [Disassembly][EffectiveAddress] [MemoryValue,h]. It shows "f5513Align, 7]A:A0..." always removing the first opening bracket of the Align tag. After editing my line a bit
I think that you may have made a typo in the notes about how the Align command is supposed to be used. Thank you so much Sour for the extremely helpful emulator!!
unregistered wrote:
It shows "f5513Align, 7]A:A0..." always removing the first opening bracket of the Align tag.
That's because you added a space after the comma - I should probably tweak it to ignore spaces inside tags, but for now you should be able to fix it by removing the extra spaces (you have another one in the "ByteCode" tag, too)
Sour wrote:
unregistered wrote:
It shows "f5513Align, 7]A:A0..." always removing the first opening bracket of the Align tag.
That's because you added a space after the comma - I should probably tweak it to ignore spaces inside tags, but for now you should be able to fix it by removing the extra spaces (you have another one in the "ByteCode" tag, too)
Thank you so much! It works incredibly now!!
Last night, after discovering your new
Show zeropage addresses as 2 bytes checkbox, I checked it and the 00 was added to the low byte (i.e. $21 becomes $2100). I can't figure out how to add 00 to the high byte.
edit: Thank you so much for this added checkbox! It will be so helpful if I can get it to work correctly.
Now this line is working so good for me:
f[FrameCount][Align,7] [A,h] [X,h] [Y,h] S:[SP,h] [P,8] $[PC,h]:[ByteCode,11h]|[Disassembly][EffectiveAddress] [MemoryValue,h] (note: there is a space between [EffectiveAddress] and [MemoryValue,h])
because there isn't A: X: Y: (saves 6 characters) and you add a $ in front of each byte in ByteCode (3 extra characters) plus I added an extra space before $[PC,h] (1 extra character) so that is a savings of two characters per line (when compared to FCEUX) and it is easy for me to read!
edit2: ...guess that would be a savings of four characters per line because [P,8] doesn't have any extra characters preceding it.
unregistered wrote:
I checked it and the 00 was added to the low byte (i.e. $21 becomes $2100). I can't figure out how to add 00 to the high byte.
Yea, that's my bad, didn't test it properly and added the extra 0s at the end instead of the beginning, I'll fix it and try to upload a new build when I get a chance.
^thanks
Try this Format Override line:
Code:
f[FrameCount][Align,7] [A,h][X,h][Y,h] S:[SP,h] [P,8] $[PC,h]:[ByteCode,11h]|[Disassembly][EffectiveAddress] [MemoryValue,h]
only edit:
or this one:
Code:
f[FrameCount][Align,8][A,h][X,h][Y,h] S:[SP,h] [P,8] $[PC,h]:[ByteCode,11h]|[Disassembly][EffectiveAddress] [MemoryValue,h]
edited again (sorry): Notes for others: this one will keep everything the same as the first Format Override line suggested in this post, except [FrameCount] will be able to reach 9,999,999 (without the line moving down 1 column). Note: Anything over 999,999 frames will cause no spacing between [FrameCount] and [A,h] but, it won't be difficult to read, I think, because [A,h] changes often and [FrameCount] only changes once at the beginning of each frame. I would just turn the wheel on my mouse a bit to scroll Notepad up and down to see where the start of [A,h] is, if I needed to.
And... Notepad, Programmer's Notepad, and, I guess, every other text editor
starts the cursor at the left margin on column 01; Mesen starts the column number at 00, but that's ok because you just need to add 1 to the number after
Align, so, in my example above, [A,h] always starts at column 09 in Notepad.
And... the "f" at the start (taken from FCEUX trace log files) allows me to easily find a frame number in Notepad.
in Notepad, click View>Status Bar to show the cursor's line and column location
This one is the same amount of characters... the small "r" for "registers" will make that hexadecimal group less confusing sometimes, for me at least, and it will be obvious where the registers start despite the FrameCount.
Code:
f[FrameCount][Align,8]r[A,h][X,h][Y,h] s[SP,h] [P,8] $[PC,h]:[ByteCode,11h]|[Disassembly][EffectiveAddress] [MemoryValue,h]
This
build should fix the issues you had. It fixes the bug with 2-byte display for zero page addresses and makes the parser for the format string more lenient w/ regards to spaces (and it should display "[Invalid tag]" in the output for most scenarios where it can't make sense of the tag)
THANK YOU, SOUR, VERY MUCH! Code:
f[FrameCount][Align,8]a>[A,h][X,h][Y,h] [P,8]`[SP,h] $[PC,h]:[ByteCode,11h]|[Disassembly][EffectiveAddress] [MemoryValue,h]
I don't know if anyone cares, but here is the Format Override that makes me really happy.
If it gets changed again, I'm not going to post it here anymore; I promise.
The ">" is like an arrow showing me where the accumulator byte is. X and Y bytes follow the accumulator byte. The "`" is from the key to the left of the 1 key (on the left of the keyboard). Needed a nonspace character that would allow a quick search for a stack pointer value. Searching
`ff will match only the lines in the trace log file where the stack is clear. And that character's small high diagonal mark makes the file more interesting for me.
All the extra space on each line allows more room for commenting.
I have finally tried this emulator's debugger more seriously (for rom hacking purposes). It seems incredible and feature rich, thank you so much for all this work! I'll keep recommending this emulator to everybody.
1a- One thing that I need a lot when rom hacking is to keep resetting data/code verification and I can't find a way to reset it easily. The reason I need this is that it makes it easier to find specific bytes related to the current moment in the game. I pause the emulator, I clear the code/data logs, then I frame advance or unpause/pause it to the point where I know that what I want has been read or executed. This way I can more easily find it in the debugger or on the memory editor.
1b- Also, is there a way to manually load and save CDL files that don't share the rom's name? Is this possible currently?
2- Another thing was that I felt like double clicking the code on the debugger should open the memory editor, but I imagine that I could get used to pressing F1, which isn't bad for the workflow, maybe even better than double clicking. It's just that using the mouse is more intuitive.
3- Also, I am used to selecting savestate slots and then having a hotkey to save to current slot and another to load from current slot. That way I can work on a single savestate really quickly with the load and save hotkeys very close to the buttons I use, giving equal ergonomic usability to all slots. I understand this is a personal preference, but maybe if a lot of people chime in about this, it would show that it is an important feature. I see it has "select next slot" and "select previous slot" but I really wish there was a hotkey for each selectable slot (1 through 9).
4- One more thing: is there any way I can rearrange the elements in the middle of the Debugger window so that I can fit the memory editor, the emulator and the debugger on a full hd display all at once without overlapping? Example of the overlapping:
https://i.imgur.com/KbyFXJA.png
5- Also, notice the unused space on the left of the address on the left column after I made the font smaller. I can't move the addresses to the left to occupy that area.
6- Sorry for so many things! One more: is there a way to save the windows positions so when I reopen mesen my workspace is the way I left it? The debugger window and memory editor window seem to always open in random positions.
nesrocks, click Options>Preferences then click "Shortcut Keys" at the top and scroll down a bit and there's "Save State - Slot #" and you can set up to two different key presses for saving to slots 0 through 10. Below that is "Load State - Slot #". Hope this helps you with issue 3.
I was able to assign the Pause key for pausing the game along side the Esc key. So happy!
Thanks! But... That's not how I use it with other emulators. I use numbers 1 through 9 simply to select a slot (not save or load), and then I use Q to save to current slot and E to load from current slot. I don't want the savestate to be saved or loaded when pressing the numbers, I just want to select it. So the numbers work more like a savestates manager. Reason being: I more often save and load than I change active savestate slots. The closest to this workflow that mesen has is the increase/decrease savestate slot hotkeys, which is how I set it up for now (1 decreases, 2 increases, Q saves and E loads), but it would be just better to select the slot directly with a number key.
I use tab for pausing and left control for frame advancing. This goes more than 10 years back from my tool-assisted speedrun background and it's how I've used emulators ever since.
Thanks for the feedback!
Some answers:
1/1b) In the debugger's menu, in Tools->Code/Data Logger, you can reset/load/save CDL files. If you want to reset the CDL data for a specific portion of PRG, you can select it (in the code window or hex editor) and then do right-click -> mark selection as -> unidentified code/data. Do you need something more beyond this?
2) Double-clicking is already used for a couple of others things, so I can't easily make it do another one on top of that :\ Double-click on the address in the left margin will add/edit a label, double-click on an address/label in the code will scroll to that address. You can remap the F1 shortcut to anything else if that helps (but not mouse buttons)
3) I can add some more shortcuts to select a specific slot if that helps, though I may have to start categorizing them because the list is starting to be rather long...
5) The issue you're having with the margin (and overall weird looking code window) is very likely due to selecting a font that's not a monospace font. The UI will let you select any font, but it will only work properly with monospace fonts - try Courier New, Consolas (this is the default), Lucida Console, etc., and it should look much better, even at small font sizes (you can download & install monospace fonts easily enough if you don't like any of the ones you have installed).
4) If you reduce the font size in the hex editor's window, you should get pretty close to fitting all 3 windows side-by-side in 1920px. The debugger's minimum size is a bit high for no good reason when using smaller font sizes - I could probably make it about 100-200px smaller without too many issues.
6) I know the main emulation window remembers its position, but that might be the only one - I think everything else only remembers their size. I'll make each window remember its position, too.
Hope that helps,
Thanks for the information! I understand most of what I said are small things but I feel like they add up in the end when doing lots of practical work. I hope this doesn't sound like I'm too nitpicky. Most of the issues are gone for me now, I'm just going to comment on all of them anyway.
1 - I hadn't seen that! I guess clicking the menu feels slow to me, but doing Ctrl+A, Ctrl+3 on the memory editor sounds fast enough. It beats having a window open just for this, like in fceux.
2 - Clicking the opcode on the debugger does nothing on my version. Maybe use that? Not the best solution because it would mean you can't go to EVERY byte, but close enough really. Anyway, hitting F1 isn't bad either. On that note, I find it interesting that you can edit the ROM bytes by looking at CPU Memory View. I'll have to get used to that too (F1 always brings up the CPU Memory view).
3 - I may get used to the increase/decrease setup, since I don't change slots that often. So maybe it's ok for me like this (unless a massive mob of users chime in about this, it's probably just ok to leave it like that). As for me, I just need more testing, but I feel I can adapt easily.
4 - Indeed, I can almost fit it now with a smaller font. I'd say it's even good enough for use, although my OCD complains that there's empty space under "Input button setup" meaning all those elements on the middle column could be rearranged to make the whole window not as wide. So there's still some overlap.
https://i.imgur.com/eTPnwUd.png
5 - True! Thank you! Maybe add a warning that mono-spaced fonts are recommended?
6 - Thank you! Remembering how the user left things is always great for improving the workflow. Reduces the amount of trouble adjusting everything between sessions. For example, I can notice that the memory editor resets to CPU Memory View on every session. I work almost exclusively with PRG ROM View, so remembering that setting would be good (this clashes a bit with how F1 works by bringing up CPU Memory view, but okay). But as a rule of thumb, remembering everything is the way to go.
No worries, it's by fixing the nitpicky stuff that you get a polished UI :p I don't actually use the debugging tools all that much myself (other than the times I'm trying to figure out why a particular rom doesn't work properly in Mesen, that is), so it's very helpful to have the opinion of people who actually use it for rom hacks or homebrew dev and the like.
2- I'll try to come up with something for this. At the moment, pondering the idea of adding an icon on the right-hand side of the current code row that could be used to open the memory tools at that location (and potentially other things?). I'd only show the icon for the row below the mouse cursor or such. Not quite sure just yet.
3- I'll keep it like it is for now then, and see if other people ask for the same thing or not.
4- That's mostly due to the fact that I try to make each tool fit in 1024x768 res, maximum. Which leaves about 700ish pixels when you factor in the taskbar. So all that blank space can't actually have anything that's permanently there, otherwise the UI would break on smaller screens. I just added a "Use Vertical Layout" option, though. This will move the label/function lists from the right side to the empty spot, which removes any empty space and makes better use of the available space at larger resolutions.
5- I added a warning/confirmation when changing the font for any tool that warns if the font doesn't appear to be a monospace font.
6- All debugger tools should now remember their last position, and I also made the hex editor remember the memory type setting. Like you said, though, this will conflict with the F1 shortcut, but there isn't too much I can do about that.
In general, all settings should be remembered. If you find anything that doesn't get saved, let me know, it's probably a bug.
Here's a build with the changes:
download
Let me know what you think.
Sour, it seems that after creating something like
way more than 5 trace log files, Mesen starts creating trace log files with "Additional information (IRQ, NMI, etc.)" disabled, even if it is checked. The only solution I've found to correct this is to close and reopen Mesen.
"Additional information..." is important for me because it provides a nice separation between frames. (I'm using a condition
pc != $c2ad && pc != $c2af to skip all of the frame filler at the end of each frame. So helpful!
)
Is this fixable? It has happened like four different times.
p.s. Sorry, but I feel this is important. My Override is:
Code:
f[FrameCount][Align,8][A,h]|[X,h]|[Y,h] [P,8]`[SP,h] *[PC,h]:[ByteCode,11h]|[Disassembly][EffectiveAddress] [MemoryValue,h]
My Windows Notepad uses the Courier New font and after searching through Windows Character Map the
* character is actually Unicode character
U+08EF: Undefined in Courier New, but wasn't sure how to print that character here using this tablet. That character is important to me because it is a tiny mark that appears underneath the first character of the PC and that lower mark straightens each line (the
` messed up each line, for me, before adding U+08EF).
Note: there's a name for characters that appear below or above other characters, but I don't remember the name: combining mark. edit. final edit.
Thanks for the report! I was only able to reproduce this once, but I'm pretty sure I've fixed it.
You can grab the latest automatic dev build with the fix from appveyor here:
https://ci.appveyor.com/project/Sour/me ... /artifacts
Let me know if you still get the bug with this build.
Thanks Sour! I will try using it today and report back with my experience. After downloading it and opening the .exe, AVG scanned and reported that it "have discovered a very rare file" and they took it to their offices to run a detailed virus scan. They said it would be around 79 minutes before they could converse further. I'm sure your file is virus clean, but thought you would enjoy reading their message about your excellent program.
AVG Virus Lab just sent
Quote:
AVG Virus Lab says: This file is clean. (check mark) MesenDevWin-0.9.5.120.exe
so I'm going to start using it now, but my feedback will probably be tomorrow. : )
Anti-virus software these days is a bit paranoid - executables that aren't signed or haven't been seen/scanned by other computers using the same AV software will often end up being flagged as potentially dangerous (even though the AV actually can't find anything dangerous with the file itself).
Obviously in this case, any Appveyor build will likely end up being flagged. To be honest, I'm surprised the other builds I sent you before didn't get the same treatment.
I feel helpless in my attempts to prevent the newest Mesen from talking about a file for recording movies on startup. I don't want it to create movie files; is this something new?
unregistered wrote:
I feel helpless in my attempts to prevent the newest Mesen from talking about a file for recording movies on startup. I don't want it to create movie files; is this something new?
Your description is a bit vague, so hard to say for sure. The only thing I can think of right now is.. are you pressing the Pause/Break key on your keyboard? If so, you might be triggering the internal "test recording" mode I have for my own use (which appears to be active in the appveyor builds, by mistake - it's usually disabled in any build I release manually)
Yes, just figured out if I use the Esc key pausing works normally!
Hope the "test recording" mode goes away sometime because I enjoy using the pause key for pausing the game. Maybe you could make it to disable "test recording" if the pause key is bound in your shortcut keys binding section?
edit: Or, could you add your "test recording" mode to the short cut keys list and then we could unbind the pause button for that if we would like to? Thought maybe that suggestion would be easier on you and Mesen.
If you grab the latest Appveyor build, it should be fixed. The test recording mode is disabled on appveyor builds now (like it was meant to be in the first place)
Sour, so far your newest Mesen graciously allows me to use the pause key and hasn't ever disabled your "Additional information (IRQ, NMI, etc.)" checkbox!
Sorry for my late reply.
Note: I have noticed that your
Format: Override code must always write the entire line on every line it affects because lines that don't use the
[EffectiveAddress] [MemoryValue,h] have two spaces at their end. The first space must be always added after
[Disassembly] and the second space is one I kept from your example. On lines ending with an EffectiveAddress only one space is left at the end. Fix if you want to; just notifying you.
Regarding my previous post, if you decide to fix that maybe you could create three separate strings and then write the appropriate one. Just more thoughts from me, who is a newbie focusing on one machine, to you, who writes an extremely excellent emulator that can be used on many different machines.
Is the issue just having extra whitespace at the end of the line? If so, it's not really that much of an issue, but I guess there's no harm in removing the whitespace at the right side of each line. If the tags are used in the middle of a line there isn't much I can do about it, though.
Yeah, it's not really an issue, I agree, but, I just wanted you to be aware of it.
And, if those things are in the middle of a line, and you still want to fix this, it seems to me that it would be somewhat fixable if you just built separate strings and then, for instance, used the string without the space following
[Disassembly] for lines that only use
[Disassembly] (implying they don't use
[EffectiveAddress] [MemoryValue,h]). And you could add a space at the end of
[EffectiveAddress], then that extra space we have following it could be removed and that wouldn't be a problem either.
Though, then you would have to remove that added space in another string and print that string if the line doesn't have a
[MemoryValue,h]. So you would need to have 3 separate strings to choose from.
That may still be problematic for those who place Disassembly, EffectiveAddress, and MemoryValue in the middle of a line... but it sounds good to me thinking about it. Sorry, I can't test it.
To be honest, I'd rather not needlessly increase the complexity of the trace logger just to fix a couple of whitespace issues, though :p
I'll most likely add some code to trim the whitespace at the end of lines, just to avoid making the logs larger than they need to be, but beyond that, if you're using these in the middle of the line (like the default format), you will want to use [Align] to keep things aligned in a readable manner, anyway, so it shouldn't really matter much at all.
Sour wrote:
To be honest, I'd rather not needlessly increase the complexity of the trace logger just to fix a couple of whitespace issues, though :p
I'll most likely add some code to trim the whitespace at the end of lines, just to avoid making the logs larger than they need to be, but beyond that, if you're using these in the middle of the line (like the default format), you will want to use [Align] to keep things aligned in a readable manner, anyway, so it shouldn't really matter much at all.
^Brilliant, thank you so much!
Sour, I guess it's apparent, but I'm not one who thinks of everything relevant.
You are extremely great at doing that!
(I totally forgot about Align.)
Hmm, I wanted to use mesen to debug some mmc3 irq splits, but instead I got a surprise. Never saw this glitch before (big yellow rectangles).
Attachment:
trophy_001.png [ 4.02 KiB | Viewed 2257 times ]
Let me know if I could privately supply a ROM or anything of that nature. Don't see this on fceux, nestopia, nintendulator, NES or AVS.
Is this with 0.9.5 or one of the recent appveyor builds? If 0.9.5, Mesen is pretty mean on state initialization for MMC3 and that might be your issue. If it's a recent build, it mimics FCEUX for its default initialization (mostly because some MMC3 clones appear to expect that state at power on) but adds an option to randomize the mapper's state at power on (in Emulation->Advanced).
Either case, it could very well be an issue with Mesen (e.g maybe some heuristic is detecting the game incorrectly) - feel free to PM (or e-mail) me the ROM and I can take a look on my end.
Sour wrote:
Is this with 0.9.5 or one of the recent appveyor builds? If 0.9.5, Mesen is pretty mean on state initialization for MMC3 and that might be your issue. If it's a recent build, it mimics FCEUX for its default initialization (mostly because some MMC3 clones appear to expect that state at power on) but adds an option to randomize the mapper's state at power on (in Emulation->Advanced).
Either case, it could very well be an issue with Mesen (e.g maybe some heuristic is detecting the game incorrectly) - feel free to PM (or e-mail) me the ROM and I can take a look on my end.
I think I may have inferred what could be going on. It appears as though in some cases the game is behaving like it is in vertical mirroring mode, however the game exclusively uses horizontal mirroring. I realized my game never writes to $A000 with a "1", yet it has been working just fine in several emulators, NES and AVS despite this (not counting other bugs I'm looking at right now).
GradualGames wrote:
Sour wrote:
Is this with 0.9.5 or one of the recent appveyor builds? If 0.9.5, Mesen is pretty mean on state initialization for MMC3 and that might be your issue. If it's a recent build, it mimics FCEUX for its default initialization (mostly because some MMC3 clones appear to expect that state at power on) but adds an option to randomize the mapper's state at power on (in Emulation->Advanced).
Either case, it could very well be an issue with Mesen (e.g maybe some heuristic is detecting the game incorrectly) - feel free to PM (or e-mail) me the ROM and I can take a look on my end.
I think I may have inferred what could be going on. It appears as though in some cases the game is behaving like it is in vertical mirroring mode, however the game exclusively uses horizontal mirroring. I realized my game never writes to $A000 with a "1", yet it has been working just fine in several emulators, NES and AVS despite this (not counting other bugs I'm looking at right now).
Bam, that was it. I wrote a "1" to $A000 and Mesen is happy now.
Yea, that's the behavior I would expect if you don't initialize $A000. Other emulators might be defaulting to the mirroring flag you set in the header - that flag is completely ignored for MMC3 as far as Mesen goes.
Are you testing on an authentic MMC3 chip (e.g not a repro of sorts), or on an Everdrive/Powerpak? I don't believe the MMC3 gives any guarantees as to its power-on state as far as mirroring goes (at least, the wiki says nothing about its power on state)
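As a rough sketch of the mirroring behavior being discussed, assuming the documented MMC3 register layout (even writes to $A000-$BFFE select mirroring via bit 0); illustrative Python, not Mesen's code:

```python
def mmc3_write(state, addr, value):
    """Toy model of MMC3 mirroring control: even addresses in $A000-$BFFE
    set nametable mirroring (bit 0: 0 = vertical, 1 = horizontal). Once the
    mapper controls mirroring, the iNES header flag no longer matters, which
    is why leaving $A000 uninitialized gives emulator-dependent results."""
    if 0xA000 <= addr <= 0xBFFE and (addr & 1) == 0:
        state['mirroring'] = 'horizontal' if (value & 1) else 'vertical'
    return state
```

This matches the fix above: a game that needs horizontal mirroring should explicitly write 1 to $A000 at init rather than rely on the power-on state.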
Sour wrote:
Yea, that's the behavior I would expect if you don't initialize $A000. Other emulators might be defaulting to the mirroring flag you set in the header - that flag is completely ignored for MMC3 as far as Mesen goes.
Are you testing on an authentic MMC3 chip (e.g not a repro of sorts), or on an Everdrive/Powerpak? I don't believe the MMC3 gives any guarantees as to its power-on state as far as mirroring goes (at least, the wiki says nothing about its power on state)
I have been testing with a PowerPAK, but never observed this issue, so I wonder if it is reading my iNES header like an emulator might? I mean I guess it must be in order to program the mapper before booting the game.
This was my first time trying Mesen this evening, and I just learned how to import ca65 debug files. Um...
THANK YOU.
No problem - that's what it's there for :p
Also, if you haven't noticed it already, there's a "Source View" toggle in the right-click options in the code, which should allow you to debug the original source code rather than the disassembly.
If you're trying to debug the timing of PPU writes, the "Event Viewer" might be useful too, if you haven't looked at it yet.
ok but is it compatible with gamepads?
It might be desirable to explicitly define in NES 2.0 whether, for boards with mapper-controlled mirroring, the iNES header's Mirroring bit is supposed to be ignored or is supposed to denote the initial mirroring state.
NewRisingSun wrote:
header's Mirroring bit is supposed to be ignored or is supposed to denote the initial mirroring state.
I really want to believe that we won't find two otherwise-identical variants of hardware that do that.
Yeah, I'm pretty sure that this information can be inferred from the mapper...
Somebody must be thinking otherwise, because a great number of Mapper 4 ROMs out there are set to Vertical Mirroring (the "bit set" setting), and I'm not talking about Mapper 206 games that are still set to 4.
I can think of one physically motivated interpretation of the iNES mirroring bit on carts with mapper controlled mirroring:
- If the chips used for this game came from a batch where the mapper register happened to default to vertical mirroring, set it to vertical.
- If the chips used for this game came from a batch where the mapper register happened to default to horizontal mirroring, set it to horizontal.
NewRisingSun wrote:
Somebody must be thinking otherwise, because a great number of Mapper 4 ROMs out there are set to Vertical Mirroring (the "bit set" setting), and I'm not talking about Mapper 206 games that are still set to 4.
Do you have evidence that there are two different classes of games that rely on this difference? I'm not asking about whether people put a thing in the headers, but instead whether the information is relevant.
Of course it's not relevant for emulation, unless you are dealing with a ROM that should be mapper 206.
My interest is purely in having unambiguous NES headers. Given that iNES and NES 2.0 do not explicitly denote mapper-controlled mirroring, what should the mirroring bit be for those games? I don't like "Whatever, doesn't matter", because it means that the same game can have two different headers that are both correct.
I can live with either "Mapper-controlled mirroring means that Mirroring is not fixed to Vertical, so the bit must be zero" or "The mirroring bit should reflect the initial state of the mapper-controlled mirroring"; just pick one and make it official.
I would personally prefer the former.
Obviously there's a small fear that somewhere some hardware could possibly exist where manufacturers made two otherwise-identical ICs that differ only in their mirroring power-up state, and people released games that depend on this difference, but ... like I said, I really want to believe we won't find that.
Forcing the value to be 0 for mapper-controlled hardware in NES 2.0 sounds fine to me. At the very least, it imposes a single valid header for each game and seems better than forcing a default that might be incorrect.
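As a sketch of the rule being proposed (hedged: the mapper list below is illustrative only, and the flag layout follows the standard iNES format - mapper low nibble in flags 6, high nibble in flags 7, mirroring in bit 0 of flags 6):

```python
# Hypothetical sketch of the proposed rule: for mappers with
# mapper-controlled mirroring (e.g. MMC3 = mapper 4), force the iNES
# header's mirroring bit (flags 6, bit 0) to zero. The mapper set here
# is an illustrative subset, not an exhaustive list.

MAPPER_CONTROLLED_MIRRORING = {4, 5, 7, 9, 10}  # illustrative subset

def normalize_flags6(flags6, flags7):
    mapper = (flags7 & 0xF0) | (flags6 >> 4)
    if mapper in MAPPER_CONTROLLED_MIRRORING:
        flags6 &= ~0x01  # clear the vertical-mirroring bit
    return flags6

# A mapper 4 header with the mirroring bit wrongly set:
print(hex(normalize_flags6(0x41, 0x00)))  # 0x40
```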
Sour, does Mesen disallow running a previously recorded movie if the rom for which the movie was recorded has changed? I was able to capture a really rare bug (in my game) with a mesen movie, and I want to play the same movie again with a patch applied. I know Nestopia just pops up a warning when the rom changes; wondering if that's possible with Mesen or not.
GradualGames wrote:
Sour, does Mesen disallow running a previously recorded movie if the rom for which the movie was recorded has changed? I was able to capture a really rare bug (in my game) with a mesen movie, and I want to play the same movie again with a patch applied. I know Nestopia just pops up a warning when the rom changes; wondering if that's possible with Mesen or not.
It's not an option, but it probably should be one :p I added a similar option for save states a little while ago, that option should probably be changed to apply to movies as well as save states.
.mmo files are .zip files - you can edit the GameSettings.txt file inside the .mmo file and replace the SHA1 tag with the SHA1 hash for your new rom file (the entire file, including the header) and it should allow you to run the movie on the new rom.
Edit: If your movie contains a save state (rather than starting at power on), you will probably also need to enable the "Allow save states to be loaded on modified roms" option in Preferences->Save Data
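The manual edit described above could be scripted, for example (a hedged sketch: the exact layout of GameSettings.txt and its "SHA1 <hash>" tag format are assumptions - inspect your own .mmo before using something like this):

```python
# Sketch: recompute the SHA1 of the new rom (the entire file, header
# included) and patch it into GameSettings.txt inside the .mmo, which
# is just a zip file. The "SHA1 <hash>" line format is an assumption.
import hashlib
import re
import zipfile

def patch_movie_hash(mmo_path, rom_path):
    with open(rom_path, "rb") as f:
        sha1 = hashlib.sha1(f.read()).hexdigest().upper()
    with zipfile.ZipFile(mmo_path) as zin:
        entries = {name: zin.read(name) for name in zin.namelist()}
    text = entries["GameSettings.txt"].decode("utf-8")
    text = re.sub(r"(?m)^SHA1\s+\S+$", "SHA1 " + sha1, text)
    entries["GameSettings.txt"] = text.encode("utf-8")
    with zipfile.ZipFile(mmo_path, "w") as zout:
        for name, data in entries.items():
            zout.writestr(name, data)
```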
Sour wrote:
GradualGames wrote:
Sour, does Mesen disallow running a previously recorded movie if the rom for which the movie was recorded has changed? I was able to capture a really rare bug (in my game) with a mesen movie, and I want to play the same movie again with a patch applied. I know Nestopia just pops up a warning when the rom changes; wondering if that's possible with Mesen or not.
It's not an option, but it probably should be one :p I added a similar option for save states a little while ago, that option should probably be changed to apply to movies as well as save states.
.mmo files are .zip files - you can edit the GameSettings.txt file inside the .mmo file and replace the SHA1 tag with the SHA1 hash for your new rom file (the entire file, including the header) and it should allow you to run the movie on the new rom.
Regardless of whether this is possible (looks like it is), I was able to find this rare bug using Mesen's debugger. I'm either going to give you a donation or a free copy of my new game when it's out, your choice.
Sour wrote:
GradualGames wrote:
Sour, does Mesen disallow running a previously recorded movie if the rom for which the movie was recorded has changed? I was able to capture a really rare bug (in my game) with a mesen movie, and I want to play the same movie again with a patch applied. I know Nestopia just pops up a warning when the rom changes; wondering if that's possible with Mesen or not.
It's not an option, but it probably should be one :p I added a similar option for save states a little while ago, that option should probably be changed to apply to movies as well as save states.
.mmo files are .zip files - you can edit the GameSettings.txt file inside the .mmo file and replace the SHA1 tag with the SHA1 hash for your new rom file (the entire file, including the header) and it should allow you to run the movie on the new rom.
Edit: If your movie contains a save state (rather than starting at power on), you will probably also need to enable the "Allow save states to be loaded on modified roms" option in Preferences->Save Data
Hmm, tried updating the sha1 hash...changed that setting...still wouldn't play for me.
GradualGames wrote:
Hmm, tried updating the sha1 hash...changed that setting...still wouldn't play for me.
I'll have to test/fix this on my end after work, shouldn't be hard.
GradualGames wrote:
Regardless of whether this is possible (looks like it is), I was able to find this rare bug using Mesen's debugger. I'm either going to give you a donation or a free copy of my new game when it's out, your choice.
Always nice to hear people are finding Mesen useful! (it justifies the 2-3 thousand hours it took to code it :p)
I'm actually unsure whether either of my NES consoles is still in working condition (haven't used them in a good 10+ years, I'd say), but I'll take whichever works better for you (and nothing at all is fine, too!)
I just pushed a small change that should allow your use case to work.
With this change, if the "Allow save states to be loaded on modified roms" is turned on and the currently loaded ROM has the same filename as the rom originally used to record the movie, it will ignore the hash check & playback the movie using the current rom.
An appveyor dev build with the change should show up here in a few minutes:
https://ci.appveyor.com/project/Sour/Me ... /artifacts
This is mostly a temporary solution - I'll have to come up with a less cryptic solution (e.g. actually adding a UI to start movie playback with an option to use the current rom, regardless of its name or hash, rather than trying to find & load a rom with the correct hash)
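The check described here could be sketched roughly like this (illustrative Python, not Mesen's code):

```python
# Rough sketch of the movie playback check described above: play if the
# rom hash matches the one recorded in the movie; otherwise only play
# when the "allow save states on modified roms" option is on AND the
# filename matches the one recorded in the movie.

def can_play_movie(movie_sha1, movie_filename,
                   rom_sha1, rom_filename, allow_modified):
    if rom_sha1 == movie_sha1:
        return True
    return allow_modified and rom_filename == movie_filename

print(can_play_movie("AAA", "game.nes", "BBB", "game.nes", True))   # True
print(can_play_movie("AAA", "game.nes", "BBB", "game.nes", False))  # False
```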
Sour wrote:
I just pushed a small change that should allow your use case to work.
With this change, if the "Allow save states to be loaded on modified roms" is turned on and the currently loaded ROM has the same filename as the rom originally used to record the movie, it will ignore the hash check & playback the movie using the current rom.
An appveyor dev build with the change should show up here in a few minutes:
https://ci.appveyor.com/project/Sour/Me ... /artifacts
This is mostly a temporary solution - I'll have to come up with a less cryptic solution (e.g. actually adding a UI to start movie playback with an option to use the current rom, regardless of its name or hash, rather than trying to find & load a rom with the correct hash)
That was fast! Thank you sir. I'll get a chance to try this out in a couple days.
I thought of something today which I might really like. Or, if there are lua scripting hooks for it, that'd be more than enough for me:
I want to launch Mesen with my rom playing and recording a movie with a given name, every time (probably a time stamp as well as the rom name).
There are many times I've noticed something odd happening in my game and I wrote it down in text but wasn't able to get back to it for a few weeks; and when I get back to it, found the bug hard to reproduce. If I had a movie for EVERY emulator session testing my game, then ANY time anything odd happened I could archive a copy of the current rom, git hash, and mesen movie so I could easily go back and reproduce it.
I guess all I'd need to accomplish this is a command line argument to Mesen saying to start recording a movie to a given file right away. Maybe there already is, I'll RTFM after this post.
GradualGames wrote:
If I had a movie for EVERY emulator session testing my game, then ANY time anything odd happened I could archive a copy of the current rom, git hash, and mesen movie so I could easily go back and reproduce it.
I guess all I'd need to accomplish this is a command line argument to Mesen saying to start recording a movie to a given file right away. Maybe there already is, I'll RTFM after this post.
Go all the way and have a hotkey that saves the last 30 seconds as a movie, like a game console "share" button. :p
Though, that would require the emulator to be keeping a buffer all the time if that feature is enabled, but it's already going to have to do something similar when rewind is enabled.
I don't know if this has been asked for yet, but when I was developing Nova the Squirrel I would have found it helpful to break when attempting to display uninitialized OAM after OAM has been left to decay - though of course you then need to decide how much time should pass before the "OAM decay" kicks in.
GradualGames wrote:
I want to launch Mesen with my rom playing and recording a movie with a given name, every time (probably a time stamp as well as the rom name).
NovaSquirrel wrote:
Go all the way and have a hotkey that saves the last 30 seconds as a movie, like a game console "share" button. :p
My todo list actually has had a "Record movie from rewind data" bullet point in it for the past 1+ year. It's probably not all that hard to implement, either. I'll try to take a look when I get the chance. (There's no way to record a movie from the command line or from Lua at the moment, either)
NovaSquirrel wrote:
I would have found it helpful to break when attempting to display uninitialized OAM after OAM has been left to decay, but of course then you need to decide on an amount of time to have the "OAM decay" kick in after.
There is an option to emulate OAM decay in Emulation->Advanced. This option sets OAM bytes to $FF if the byte (or specifically if that particular group of 4 bytes) hasn't been read/written to in over 3000 CPU cycles (~1.7ms), which means the sprites disappear from the screen. I could add an option to break whenever a read to OAM occurs after that 1.7ms delay (it would require enabling the OAM decay feature though)
Quote:
sets OAM bytes to $FF if the byte [...] hasn't been read/written to in over 3000 CPU cycles (~1.7ms), which means the sprites disappear from the screen.
At least for my personal NES, all the bytes seem to decay to $10 instead of $FF, moving them blatantly into the visible portion of the playfield.
Sour wrote:
specifically if that particular group of 4 bytes
Should be a group of 8 bytes.
lidnariq wrote:
At least for my personal NES, all the bytes seem to decay to $10 instead of $FF, moving them blatantly into the visible portion of the playfield.
My own NES puts decayed OAM entries on-screen too, so when I intended to clear OAM (but didn't run OAM DMA at the right time) I had an occasional single frame of glitchy sprites which Mesen wouldn't have helped me find even with that option. That's why I think a notice or break would be good.
lidnariq wrote:
Should be a group of 8 bytes.
Thanks!
Changed this to be based on groups of 8 bytes, and it gets set to $10 instead of $FF, which makes the sprites visible (assuming sprite $10 isn't empty).
Also added a "Break on decayed OAM read" option in the debugger. This option requires the "Enable OAM decay" option to be enabled (otherwise it will be grayed out). Between that and the break on uninit memory option, it should be able to catch most memory initialization issues. Additionally, the sprite viewer now takes into account OAM decay when displaying the sprites (previously it would show the contents of sprite RAM as if it hadn't decayed).
These changes are in my temporary VS Dualsystem branch at the moment, but all of this should get merged to master sometime today or tomorrow.
And I'm starting to feel like I need to add some sort of log that explains why the execution breaks, etc. It's easy to think the debugger is broken when it starts breaking on every other PPU cycle due to OAM decay.
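Putting the last few posts together, the decay model could look roughly like this (an illustrative Python sketch under the stated assumptions - 8-byte groups, decay to $10 after ~3000 CPU cycles without access - not Mesen's actual implementation):

```python
# Sketch of the OAM decay model described above (illustrative, not
# Mesen's code): OAM decays in 8-byte groups; a group that hasn't been
# accessed for ~3000 CPU cycles reads back as $10, and a debugger could
# flag such a read for the "break on decayed OAM read" option.

DECAY_CYCLES = 3000

class OamDecay:
    def __init__(self):
        self.oam = [0] * 256
        self.last_access = [0] * 32  # one timestamp per 8-byte group

    def write(self, cycle, addr, value):
        self.oam[addr] = value
        self.last_access[addr // 8] = cycle

    def read(self, cycle, addr):
        group = addr // 8
        if cycle - self.last_access[group] > DECAY_CYCLES:
            return 0x10, True   # decayed value, "break on read" flag
        self.last_access[group] = cycle  # access refreshes the group
        return self.oam[addr], False

oam = OamDecay()
oam.write(0, 0x00, 0x42)
print(oam.read(5000, 0x00))  # decayed: the group reads back as 0x10
```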
The debugger is very nice (though a bit slow on my old comp). Thank you.
Is this the most accurate nes emulator?
Whats the bsnes of the nes world?
There isn't one best-of-breed NES emulator.
I'd say you'd be pretty well off with any of Nestopia, Nintendulator, puNES, or Mesen.
Although to be quite honest, if you're not doing development, FCEUX is plenty accurate.
Mesen right now finishes all those emulator test ROMs perfectly that others don't. Stock Nintendulator, thought of being the most accurate of the older emulators, still has CPU/PPU synchronization very slightly off. Don't know about puNES.
It's hard to say what's the "most accurate" - the reality is that many have similar accuracy and will run the majority of licensed titles without any issues (arguably, a lot of them might have "bsnes-level" accuracy in terms of NES emulation). For NES development, I try to make Mesen as "mean" as possible so that game devs can at least feel relatively confident that their code should work on a real NES, too.
As far as test roms, Mesen passes more or less all of them - but that doesn't mean it's "perfect". There are definitely a number of things (e.g stuff that no game would ever rely on) it doesn't simulate anywhere near perfection. Trying to emulate every single detail would incur too much loss of performance to the point where some older computers might have trouble running it (esp. with the debugger tools active). At this point, I try to limit any "improvements" to accuracy to either things that have little to no performance cost or things that actually matter in real games (licensed or not)
Hi Sour.
I'm not familiar with Mesen. I'd like to trace R.C. Pro-Am II and compare the log with my emulator's. This game is unplayable in RockNES. How do I generate a trace log?
Open the debugger (Tools->Debugger) and then the trace Logger (Tools->Trace Logger).
There are a few options to configure what's shown in the log (if you're using one of the latest dev builds you can customize the format).
Just press "Start Logging" to select a file & create a trace log.
I don't know if it's different in the latest dev build (currently using Mesen version DevWin-0.9.5.139), but to open the Trace Logger I always had to click Debug > (the only option), then click the Trace Logger icon inside the debugger that opened.
But after clicking Options > Preferences and checking the last box under General, "Enable developer mode", you can simply reopen Mesen, click Debug > "Trace Logger", and have fun trace logging.
My penultimate post in this thread, I think, holds my current "Format: Override" line, if you are interested.
edit: Sorry, it's in the second post from the top of page 26. Just wanted to correct my mistake. :)
final edit.
I will now post in this thread since it relates to the emulator specifically. For now I'm not able to make it work on Mac yet. With Wine Bottler 2.0, it can start, but the screen shows nothing. I still think it's my computer that is missing something, so I will try to test more and share my results.
One question: I was able to test the debugger, which is nice when you load the symbols from the dbg file, but I can't seem to find anything that shows the current sprites, pattern tables, etc. Either I missed a menu or Mesen doesn't have that yet?
There is a display bug on Linux - text gets overlapped for the imported symbols. I will share a screenshot once I have the time to test it again.
Thanks for the effort in trying to get it to work on macOS - I'm definitely interested in figuring out a viable way to run it (whether it's Wine or Mono). I'll try to make a SDL build for Windows (I should be able to reuse most of the Linux code for this) - that might help fix the display issues (e.g by taking DirectX out of the equation).
The debugger needs a monospaced font to display properly (this is probably what's causing the issues on your end) - under Linux it should usually try to install and use that one by default, but this might not work for all setups. You can manually select a monospaced font in Options->Font Options->Select Font, which should solve the display issues (you will have to select the font separately for each debugging tool)
There are a bunch more tools (documentation is available
here) beyond the main debugger available in the debugger's "Tools" menu. You can also make them available in the main window by turning on "developer mode" in Options->Preferences (from the main window). The PPU viewer is used to show the PPU's nametable/chr/sprite/palette.
Edit: On macOS, are there any errors shown in Tools->Log Window after you try to start a game?
I knew I saw them somewhere! My bad - I think I need to learn to read the manual.
As for the display issue, yep, it was the font. The default set on my distribution was DejaVu Sans, so once I selected my favorite mono font everything was fine. There are so many new options now with the developer mode, I feel like a kid in a candy store.
Sour wrote:
Edit: On macOS, are there any errors shown in Tools->Log Window after you try to start a game?
I didn't check that since I was not aware of that option. For now I only tested with Wine (3.0 with brew / 2.0 Wine Bottler) to see if it could start or not. Bottler with some .NET framework did start, but showed no screen. Now with all the new developer options, I really need to put more time into figuring out the cause. I will share the logs once I can access my home computer tonight.
So I did a bit of Wine testing on my Ubuntu 18 VM using Wine 3.0 (32-bit mode) and .NET 4.6 (installed by using winetricks).
Mesen 0.9.5 runs - sound is fine, graphics are scaled incorrectly (~80% of the viewport is black - potentially a Wine DX11 bug?)
I also built a (terrible) 32-bit SDL2 windows build that still uses DirectX for gamepad input, but uses SDL2 for the graphics/sound.
This one shows the video properly, but the sound seems sort of broken (it's a VM though, so that could potentially be a part of the problem). You can grab this build
here if you want to see if it works any better than the DX11 build on your end.
In both cases, though, the debugger window seems to be somewhat sluggish (at least, compared to using Mono on the same VM) - hopefully this might be partially caused by the VM rather than Wine. The emulation itself is running around the same speed as when I run the same SDL build directly on windows 10, though.
This was the issue I had with Wine on 18 - the graphics were not scaled properly. As for debugging, I remember it being sluggish even on an 8th-gen 4-core i5, so maybe it was not the VM but Wine per se. I will re-test the debugger with Mono, then Wine, and see how sluggish they are. But since I never really tested on Windows with the same specs, it's hard for me to judge how they are supposed to behave when working properly.
I will test this build and see how it goes.
edit:
Maybe I should try this build on osx too?
edit2:
I did some quick tests during lunch and:
- the original build is fast with mono
- the original build when launched with wine, is horridly slow
- the sdl build when launched with wine is horridly is slow too
I will test both build on the mac to see if they have performance issues.
Edit3:
I'm starting to do some tests on the Mac. With Mono 5, it fails for now, so more research is necessary. I tried it on Windows and the speed difference is night and day, even on a 10-year-old computer, so now I can confirm that there is some performance issue with Mono, though it is still usable. Some of the pop-ups that show information are a bit sluggish but usable. The debugging itself is fine for now, but like I mentioned, I was testing on a recent computer, so maybe I should try that same 10-year-old computer with Linux and see the performance impact on older machines.
I will try to do more research for Mac. For now I cannot confirm whether the cause is my configuration or an issue with the emulator under Wine/Mono.
Edit4:
I was able to install .NET 4.5 with Wine 3, but after starting, Wine fails. I think I will try to reinstall Wine 3, since I was never able to use winetricks with the UI properly anyway.
For wine 2, this is the error:
D3D11CreateDeviceAndSwapChain() failed - Error:-2147467259
For sdl version I have a different error. It says something when launched (screenshot):
Attachment:
Wine2_sdl_version.png [ 71.98 KiB | Viewed 656 times ]
The application launches afterward, but it doesn't do anything when you load a rom. With the normal version, at least the sound is working. Both have this issue when launched (maybe this is nothing):
Attachment:
io_error.png [ 51.11 KiB | Viewed 656 times ]
Now it's 2 in the morning and I need to work tomorrow, so I think I should go to bed and continue testing another day.
Banshaku wrote:
For wine 2, this is the error:
D3D11CreateDeviceAndSwapChain() failed - Error:-2147467259
I'm assuming this might just be because that version of wine does not support DX11 properly?
Quote:
For sdl version I have a different error. It says something when launched (screenshot):
This one's a bit odd. InitDll is a function that was added relatively recently (but it should be present in the SDL build). At least, as far as I can tell, the SDL build works properly in both 32-bit and 64-bit environments on my end (I'm assuming you're running Wine in 32-bits?)
Quote:
Both have this issue when launched (maybe this is nothing):
That doesn't sound too good - Mesen's .exe contains an embedded .zip file which contains a number of resources/DLL files. It looks like the code is crashing while attempting to extract data from the zip file.
Thanks for taking the time to test all of this. If you can't get Wine 3 to work, it might be worth trying to get the native Mono+Linux+SDL build to compile on macOS (e.g by running make). The only major issues that should exist are 1) the code in FolderUtilities.cpp that uses the experimental filesystem API (which isn't available in the version of clang macOS currently ships with) and 2) the code to handle gamepads (which relies on Linux-specific stuff). Both of these are pretty easy to remove.
Also, I'll try to profile the code when it's running under Wine to see if I can find any obvious performance bottlenecks that I could work around.
Wine's DX11 support is still very beta and still only marginally useful. :/
I was testing with a 32-bit profile because I think you mentioned something about it. I can redo the test with a 64-bit one.
I will try to reinstall Wine 3 to see if I have any better luck with it. As for DX11, which DX11 features does Mesen use? Is there a way to do a quick test with a lower version to see if it makes any difference on the Mac with Wine? If not, then that's ok too.
As for compiling, I will see once I can find the time. Regarding clang, if it does require a newer version, it may be possible to install one with brew.
So, a shiny new macOS VM and a lot of "I have no idea what I'm doing on macOS" moments later, I got this:
Attachment:
mesenmacos.png [ 207.84 KiB | Viewed 1138 times ]
(this is mono 32-bit on macOS, running a native macOS build of Mesen)
Now, as a screenshot, it looks kinda nice. But in reality it's mostly unusable, and I don't think I can hope to fix it.
First off, Mono WinForms on macOS runs on Carbon, which is 32-bit only, so even if I were to make it work now, it will stop working in a year or 2 once macOS drops 32-bit support. So aside from having to make a few small modifications to the code (2 files) to get it to build, I also had to manually build SDL2 to get the 32-bit version, and manually install Mono (because brew only installed the 64-bit stuff, it looks like).
Beyond that, SDL currently crashes if I try to use the Mono window to display the video (which is why there's an extra window in the screenshot for video output). Mono WinForms in general appears to be pretty buggy (crashes that don't occur in Linux/Windows, a lot of refresh issues, esp. in the debugger window, etc.) - I'm not convinced that there are ways to fix all of these. A Cocoa port of the WinForms code is kind of being worked on, from what I could gather, which could potentially make WinForms work a lot better (and in 64-bits). But I'm not holding my breath on that one - it might be a long time before it's done (if ever), and even if it gets done, it might not fix the myriad of winforms issues.
Now, as for Wine on macOS:
-The DX11 build doesn't display anything, but it runs (Wine 3.0.2, installed with brew, with dotnet45 installed with winetricks).
-The 32-bit SDL build seems to work (the frame rate says 60 fps, but I'm only getting like 0.5fps, I'm hoping this is because it's a VM running without hardware acceleration for graphics, etc.)
The main debugger window is still slow, but the rest (trace logger, ppu viewer, etc.) seems to be working decently well. There is likely a specific thing about the debugger window that's causing it to be so slow under Wine, I'll have to try and figure that part out. Overall, it's much better than trying to run WinForms on Mono.
Attachment:
winemacos.png [ 144.97 KiB | Viewed 1138 times ]
(this is a windows build of Mesen running on Wine 3.0)
So, TL;DR: at the moment, I think the best way to run Mesen on macOS is probably Wine 3.0 (32-bit) + a Windows build of Mesen that uses SDL instead of DirectX. There's probably something broken with your Wine 3 setup if the SDL build from yesterday doesn't launch properly on macOS.
I'm unsure why Mesen's DX11 usage doesn't seem to work properly in Wine - Mesen doesn't exactly do any advanced stuff, I'm essentially just displaying a texture on the screen and that's it. But I'm using the DirectX Toolkit for some stuff (for no good reason other than it was simpler that way when I first implemented this 4 years ago), so maybe there's something in there that Wine doesn't like. It might be simpler to just scrap the DX11 code and always use SDL, but I'm not sure SDL performs better than just using DX11/DirectSound directly on Windows (e.g in terms of audio latency, etc.)
I kind of had a feeling that my Wine 3.0 was not working as expected when I had issues with winetricks not installing anything from the GUI even though it worked fine on Linux. That completely confirms that I need to reinstall it. Sorry for all the trouble you went through. I will see if I can reinstall it tonight and figure out the cause. Usually I don't have any issues with brew, so something must have gone wrong during the installation.
As for DX11, I don't think your code is the issue - it's more what lidnariq said about the state of DX11 inside Wine 3. If you look at the logs when using Wine 3, there are so many messages that look like quick workarounds to make apps work that I'm not surprised.
As for SDL, other emulators use it and it's a pretty standard framework, so if you can make it work it would be a plus to be "free" of DirectX.
Or maybe there is an older version of the DirectX Toolkit that uses an older version of DirectX?
Thanks again for the tests - we are getting closer to a workable version of Mesen on Mac. Once Wine 3 works fine on my computer, I will let you know how the performance compares to Linux.
Edit:
answering my own question: there is no DirectX Toolkit for DX10. It was made for DX11, and now there seems to be one for DX12.
I don't know how much work it would require, but if we could select the renderer/sound backend as a setting, then you could keep the DX11 version for people who prefer it on Windows, and other people could use the SDL one for Mac, Linux, etc. But since I do not know your code base, it's hard to judge the effort involved.
edit2:
I re-installed Wine 3 with brew and I have the same results. I rebooted the computer, just in case, but it didn't make a difference. So there is something wrong with my current computer, and I don't know the cause. I will investigate it later. The last build I tested is the one you built with SDL and linked in this thread. How I launched it:
Code:
WINEARCH=win32 wine Mesen.exe
The current prefix is 32 bit and dotnet45 was installed in it.
Supporting both DirectX & SDL wouldn't be overly hard (though doing it properly, e.g by delay loading the DLLs, is something I have never done) - their use is limited to just 2 files. It's mostly having to embed the SDL DLL twice (32-bit & 64-bit) in the .exe that's a bit annoying, since it'll probably add another 1+mb to the filesize.
That's essentially how I run it on Wine, too, though I only had a 32-bit prefix and nothing else installed. The VM I was using is macOS Sierra (not High Sierra), in case that could potentially be a factor. What error/crash do you get when trying to run it? Is there any sort of call stack, or does Wine just refuse to start the application at all?
I was able to take the information just before going to work.
I'm using the latest version of macOS, which means High Sierra 10.13.6. There is a chance that something changed between them that could cause the error, since I had issues like that in the past, but unless I retry it on another partition with 10.13 and 10.12, I cannot say that's the cause.
I included the screenshots of the error and the backtrace. I will try to search if wine 3 has some issues with high sierra today.
Attachment:
Program_error.png [ 84.79 KiB | Viewed 1076 times ]
Attachment:
progam_error_details.png [ 187.23 KiB | Viewed 1076 times ]
Just spent some time trying to get Wine to load MSVC's PDB files, with no luck.
It would have made it far easier to figure this out if I could have known what function is causing an exception. As it is, it looks like Mesen's C++ core is throwing an exception somehow, but that's about all I can gather from the trace.
Not quite sure what else to suggest at this point - it could potentially be a real issue in my code that's only causing a crash in your particular setup (but it could very well be a Wine issue, too.)
I did some quick research this morning: last year there was an issue with 10.13 and APFS, the new file system. It was causing Wine to fail, but I do not know if this is still a problem.
One thing I could try would be to create another partition, re-install High Sierra, then test. If that doesn't work, try Sierra and test. If it only works in Sierra, then something is wrong with High Sierra. But I don't think I will have time for those kinds of tests for a while, since I will be away from home for a week soon, so maybe I will be able to test it next month on my way back.
Hey Sour,
Did some experimentation with the ultrasonic frequencies of the low timers for the 2a03 and MMC5. It seems that your emulation of MMC5 ultrasonic frequencies is inaccurate and mimics the 2a03's method of halting generation at 0x07. The MMC5 generates tone until it reaches 0x00 and then it halts. You can hear the click shutting the generation off in the recorded example from hardware by ImATrackMan.
Linked is a test that starts generation of pulses for 2a0x (both channels) and MMC5 (both channels) at 0x0D and decreases the low timer values of the channels down to 0x00.
Seems a bit obscure, but I thought you might be interested.
Also, I don't think other emulators emulate the MMC5 ultrasonic frequencies properly either. Hopefully the attached render helps with the proper frequencies.
https://cdn.discordapp.com/attachments/352252932953079813/469390090184032266/ultrasonic_tests.zip
(I attached this in the NESdev Discord channel.)
I just realized that the muting of the pulse channels under periods of $8 has to do with the sweep unit's behaviour. And as the MMC5 does not have a sweep unit, it does work below $8, I guess? I don't know if emulating that behaviour is suitable for Mesen though.
Not certain how it would have to do with the sweep register since the code is using direct writes to the timer. Mesen is emulating the behavior of the 2a0x pulses by stopping generation below a 0x08 timer value on both the 2a0x and MMC5 pulses. MMC5 pulses do not stop generation below 0x08. Dunno how emulation accuracy only relates to certain emulators.
Can't check right now, but if it doesn't work properly, it must be a silly mistake (e.g maybe a bug that was introduced due to refactoring, etc.)
Both the lack of sweep units & the fact that it doesn't get muted when the period is under 8 are supposed to be taken in consideration. There are even comments in the code for both of these:
https://github.com/SourMesen/Mesen/blob ... Audio.h#L9
I'll take a look tonight.
I recorded a wav from your NSF at a 96kHz sample rate. Comparing the results in Audacity, it seems to be working pretty decently, as far as I can tell:
Attachment:
mmc5.png [ 10.69 KiB | Viewed 980 times ]
(top is your recording, bottom is Mesen)
My guess is that you were probably at 44/48khz, or you had the NSF option to skip to the next track after 3 seconds of silence activated (which caused the track to loop during the portion of your test that doesn't use the MMC5 square channels)
B00daW wrote:
Not certain how it would have to do with the sweep register since the code is using direct writes to the timer. Mesen is emulating the behavior of the 2a0x pulses by stopping generation at >0x08 timer on both 2a0x and MMC5 pulse. MMC5 pulses do not stop generation >0x08.
Me neither but it says so on the sweep page of the wiki:
https://wiki.nesdev.com/w/index.php/APU_Sweep wrote:
If the current period is less than 8, the channel is also muted. This avoids sending harmonics in the hundreds of kHz through the audio path. This muting cannot be overridden because it is based on the current period.
It doesn't explicitly say that the channel is muted by the sweep unit, or how it is done though.
Quote:
Dunno how emulation accuracy only relates to certain emulators.
Well Sour said himself that even though Mesen is aiming to be highly accurate, he still has to balance accuracy and performance.
Pokun wrote:
it says so on the sweep page of the wiki:
https://wiki.nesdev.com/w/index.php/APU_Sweep wrote:
If the current period is less than 8, the channel is also muted. This avoids sending harmonics in the hundreds of kHz through the audio path. This muting cannot be overridden because it is based on the current period.
It doesn't explicitly say that the channel is muted by the sweep unit, or how it is done though.
The first paragraph of the section states that the section is about two conditions that cause the sweep unit to mute the channel by outputting 0 instead of the channel's current volume.
Two conditions cause the '''sweep unit to mute the channel''': 0 is sent to the mixer instead of the current volume.
[...]
If at any time the target period is greater than $7FF, the channel is muted.
[...]
If the current period is less than 8, the channel is also muted.
I thought this would be clear enough to indicate these as the two conditions, but it appears it was not. I have changed it to use more repetitive language ("the sweep unit mutes the channel" in both cases) instead of judicious use of passive voice.
Thanks, that's what I thought but now there is no doubt. So it doesn't stop generation, it simply ignores the volume setting and sends volume 0 to the mixer.
So I guess the reason MMC5 pulse channels can produce ultrasonic sounds is simply because it has no sweep unit interfering with its volume value.
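That difference can be illustrated with a toy sketch (hypothetical code, not Mesen's actual implementation), based on the two muting conditions quoted from the wiki above:

```cpp
#include <cstdint>

// Toy sketch (not Mesen's code) of the mute-logic difference discussed above.
// The 2A03 pulse channels have a sweep unit that sends 0 to the mixer instead
// of the current volume when the current period is < 8, or when the target
// period exceeds $7FF. The MMC5 pulse channels have no sweep unit, so they
// keep producing output all the way down to ultrasonic periods.
uint8_t pulse2A03Output(uint16_t period, uint16_t targetPeriod, uint8_t volume) {
    if (period < 8 || targetPeriod > 0x7FF) {
        return 0;  // sweep unit mutes the channel
    }
    return volume;
}

uint8_t pulseMMC5Output(uint16_t period, uint8_t volume) {
    (void)period;  // no sweep unit: ultrasonic periods still produce output
    return volume;
}
```

So a period of 5 silences a 2A03 pulse but not an MMC5 one, matching the behaviour heard in the hardware recording.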
I dunno... It looks like there is a "click" when MMC5 low timer is set to 0x00. That seems like it's stopping generation and doing something with volume?
I'm pretty sure that's just a combination of the resampling + the fact that Mesen only supports up to 96kHz in its UI. A period of 0 should result in a ~112kHz sound, so it's very likely it wouldn't show up on a 96kHz recording.
Setting the sample rate to 384kHz by hacking up the code a bit gives me this: (Mesen at bottom)
Attachment:
mesen384000hz.png [ 11.32 KiB | Viewed 1617 times ]
Mesen is correctly producing sounds all the way down to period 0, as far as I can tell (and your 96kHz hardware recording seems to be missing some portions, which makes sense).
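For reference, the ~112kHz figure checks out against the usual NTSC pulse frequency formula, f = CPU clock / (16 × (t + 1)); a quick sanity check, assuming the commonly-quoted 1789773 Hz NTSC clock:

```cpp
// Sanity check of the ~112kHz claim: NTSC 2A03/MMC5 pulse frequency for an
// 11-bit timer value t is f = CPU_CLOCK / (16 * (t + 1)). Period 0 lands well
// above the 48kHz Nyquist limit of a 96kHz recording, which is why those
// portions are missing from the capture.
constexpr double NTSC_CPU_CLOCK = 1789773.0;  // commonly-used NTSC clock, in Hz

constexpr double pulseFreqHz(int t) {
    return NTSC_CPU_CLOCK / (16.0 * (t + 1));
}
```

This gives roughly 111.9 kHz at period 0 and roughly 14 kHz at period 7, the highest period the 2A03's sweep unit still mutes.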
This should go without saying, but NES audio (except the 163 and VRC7) is unipolar. All sounds, including ultrasonic, have a DC component. The triangle halts instead of being silenced to zero, but any other channel, when set to an ultrasonic period, will produce audible clicks on every volume change.
If that's what Mesen is doing, it's 100% correct.
Those aren't volume changes; the attenuation in steps is due only to a change in frequency vs. the filters that are in place.
One thing about Mesen's debugger:
When stepping through code, the scroll position of the disassembly jumps up and down erratically. This makes it very hard to follow the code: it will not adjust the scroll for a few steps, then suddenly jumps, and I lose my place visually and mentally.
FCEUX and Nintendulator have a very simple approach to this: every step or breakpoint scrolls the current line to the top of the view. Centre of the view would be equally good (or maybe even better), just anything that's consistent.
The logic used for the scroll position is pretty much identical to what visual studio does: if the next instruction isn't visible in the current view, scroll the view so that the line containing the instruction is in the center of it. Otherwise, don't scroll at all. This typically means that it won't scroll until you reach the bottom of the window, and then scroll back to the center of it (except when dealing with jumps/etc). I'm not completely certain whether this is what you're getting or not based on your description, though.
Either way, I could easily add an option to always scroll the current instruction to the middle of the viewport, if that works better for you?
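A minimal sketch of the Visual Studio-style behaviour described above (function and parameter names are illustrative, not Mesen's actual code):

```cpp
// Sketch of "scroll only when the active line leaves the viewport" logic:
// if the next instruction's line is already visible, keep the current scroll
// position; otherwise re-center the view on that line.
int nextScrollTop(int currentTop, int viewHeight, int activeLine) {
    bool visible = activeLine >= currentTop && activeLine < currentTop + viewHeight;
    if (visible) {
        return currentTop;                // don't scroll at all
    }
    int centered = activeLine - viewHeight / 2;
    return centered < 0 ? 0 : centered;   // center the active line in the view
}
```

With this logic, stepping typically doesn't scroll until the active line walks off the bottom of the window, at which point it snaps back to the center.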
Well, from that described intent this is clearly a bug then:
https://www.dropbox.com/s/c9j2jrdbcyeh3g9/mesen_disassembly_stepping.gif?dl=0 (sorry, I can't seem to upload this as an attachment right now)
This happens to me very frequently. Sometimes it does stay in place like you were describing, but it's just as often like this.
One thing that also happens: any branch that goes off the currently visible page will basically scroll to a "random" location. It's very hard to follow the execution when I have to look at a new place on the screen every time it branches. I'm not sure if that's related to this bug, or if that's somehow part of the intended scrolling solution.
To have it always scroll to a fixed location for every step/breakpoint reached would be a more ideal solution than a fixed version of this for me, though. It's very easy to follow visually that way.
rainwarrior wrote:
Well, from that described intent this is clearly a bug then:
https://www.dropbox.com/s/c9j2jrdbcyeh3g9/mesen_disassembly_stepping.gif?dl=0 (sorry, I can't seem to upload this as an attachment right now)
This happens to me very frequently. Sometimes it does stay in place like you were describing, but it's just as often like this.
I've had this happen as well. Until now, I always assumed it was a fluke of mono on linux, and ignored it.
Yea, that's definitely a bug. It should either not scroll at all, or move the highlighted line to the center of the screen - anything else is incorrect. Since this particular section of code is writing to RAM, it's possible that this is somehow altering the way the ram section at the top of the window is displayed, which might offset the rest of the content (but this should only be possible in a limited number of scenarios).
Either way, I'll take a look and try to find a way to reproduce it on my end.
Might be worth mentioning that I am using a custom font, and also have the option to show unidentified code/data on as well, in case either of those might affect the ability to reproduce the problem. (I don't remember what the default font was, but after trying it the unidentified code option doesn't seem to affect it.)
The issue where the position jumps around when stepping through the code should be fixed (at least, I can no longer reproduce it on my end, on neither Windows nor Linux).
I also fixed some debugger-related memory leaks (in particular, there was a relatively bad one when the options to disassemble unidentified data and/or verified data were enabled) and improved the refresh performance for the code window by roughly 40-50% (e.g with the same settings 0.9.5 refreshes my test case about 12-13 times per second, whereas this build goes up to 18-20). Also, the split view mode now has almost no impact on performance (it drops performance by half in 0.9.5).
There's also a new option in the debugger's options menu: "Keep active statement in the center". This makes the code window behave like FCEUX/Nintendulator (except it keeps the active statement in the center, instead of at the top).
There's a fully-optimized windows-only build here:
https://www.mesen.ca/MesenJul25.zip
@gauauu, you should be able to use this Appveyor Linux build.
Let me know if you still get issues with this build (whether it's with the new option turned on or not).
This seems to work very well now. Thanks! (Both modes, centre-following and not, are good.)
Thanks to the nice event viewer, I made a PAL version of my
palette test ROM.
Some things I've noticed using it for this and other stuff recently:
1. When working on the ROM, it keeps getting new symbols from the .dbg file but the old ones are not replaced. As code moves around and I continue working I get big piles of duplicate labels in the wrong location. Is there a way to prevent this?
2. When using the event viewer, if I hit a breakpoint, only events since the start of the frame appear. Is it possible to see the previous frame's events here still?
3. Showing CHR tiles with their last known colour can be a bit confusing when their last known colour was all black during a fadeout. It's easy to turn off, but I thought I had a bug that was failing to load some of my font tiles, when it turned out they just hadn't been seen with a non-black colour yet. I don't know what to suggest exactly here... it's a great feature to see them in their current colour, just this consequence of it wasn't obvious to me at first. (I dunno... maybe an option to override with default greyscale if a palette doesn't have 4 distinct colours?)
4. Debugger keys (e.g. F5 to continue) don't work in the event viewer. Have to click back over there to resume, then click back to event viewer, etc.
BTW having scanline as a breakpoint condition is really wonderful. I also love that I can see CHR-RAM data updated byte by byte while stepping.
Thanks for the feedback!
1) This was a bug I wasn't aware of - any identical label should now overwrite any older label with the same name. This should work well in most cases, but ideally, I should probably also add an option to reset labels to their default state whenever a DBG file is loaded (e.g otherwise labels that used to exist but were erased from the code will stay in the list, etc.)
2) There's a new option to show the previous frame's events. Up to the current scanline/cycle, it will show the current frame's events, then it will show the previous frame's (e.g for everything below the yellow bar). This only has an impact when execution is stopped. Did you mean that you wanted to see all events from the previous frame, though? (plus the current frame's on top of them?) This would be feasible too, I think, but it would probably make the event viewer itself somewhat confusing to look at?
3) The last known palette option now automatically uses grayscale whenever all 4 colors in the last palette are identical. This isn't perfect, but should take care of most fade out related issues, I think?
4) I'm not too sure what I can do here - each debug window has its own set of shortcut keys (also the number of shortcuts is relatively small in a lot of them). The event viewer is the only one that doesn't have any shortcut keys, though. I might be able to enable the debugger window's shortcut keys on it (only if the debugger window itself is opened), but I'm not sure how simple that would be, I'll have to check.
RE: Breakpoint conditions - let me know if there's anything else that you think would be useful to have in the list of available values. It's only a couple of lines of code to export any additional APU/CPU/PPU state value.
Build with changes/fixes here:
https://www.mesen.ca/MesenJul26.zip
Sour wrote:
1) This was a bug I wasn't aware of - any identical label should now overwrite any older label with the same name. This should work well in most cases, but ideally, I should probably also add an option to reset labels to their default state whenever a DBG file is loaded (e.g otherwise labels that used to exist but were erased from the code will stay in the list, etc.)
Hmm, I think in most cases where I would be using a DBG file, I'd want it to wipe the workspace's labels on every reload and just use what the DBG has. I guess the weird thing is using both of these features at the same time, the workspace and the DBG... actually maybe another idea would be an internal flag for labels that will keep track of whether they came from a DBG, and if they did it could be deleted on every DBG reload?
Sour wrote:
2) There's a new option to show the previous frame's events. Up to the current scanline/cycle, it will show the current frame's events, then it will show the previous frame's (e.g for everything below the yellow bar). This only has an impact when execution is stopped. Did you mean that you wanted to see all events from the previous frame, though? (plus the current frame's on top of them?) This would be feasible too, I think, but it would probably make the event viewer itself somewhat confusing to look at?
Ah, that's great, that's exactly what I wanted. I don't have a need to see two simultaneous frames, I just wanted to be able to see back one full frame from the current position.
Sour wrote:
3) The last known palette option now automatically uses grayscale whenever all 4 colors in the last palette are identical. This isn't perfect, but should take care of most fade out related issues, I think?
Well, in the specific case where I noticed it, I was debugging a white-on-black font; because it only uses 2 of the 4 colours, it'd be invisible even if e.g. just one of those colours matched the other. That was why my suggestion was: if all four colours aren't different, use greyscale. (Though to be honest, just turning this feature off and always using greyscale is pretty good anyway, so it's pretty usable without that.)
However, trying this out, I am seeing some really weird stuff:
Attachment:
strange_chr.png [ 141.8 KiB | Viewed 1321 times ]
This is during a faded out frame of my game, but note how a lot of the font characters are full black (all 4 black), which was what it looked like when I thought my font loading code had a bug in it. (That's kinda why I asked about this... one of the characters being all black intuitively seems like
my bug to me.) At the same time, half of these tiles are being displayed with weird random looking palettes that don't actually exist in my game.
(Maybe these tiles haven't been displayed at all yet, and the last used cache is just uninitialized data?)
If there is a default to prevent 4 blacks from being used, though, it isn't working for those missing tiles (those have 4 black palette entries in that screenshot). I didn't see a new option though...?
Another thought: in the tile info there is a palette shown but there is no way to query the NES palette index of these colours. I can hover for RGB, but I can't get like the NES hex code for it?
Yea, I agree using a .DBG file would usually mean you're not editing/creating your own labels on top of it (and so resetting makes the most sense). Keeping a flag internally isn't a bad way of taking care of it, either. I'll see what's easiest to implement.
Do you have the RAM init option set to random values? (that's the only way I found to reproduce this) If so, that's most likely why - it picks up the random nametable/palette data on the first frames and uses that as the last known palette for any tile that didn't get used after that. I'll add a check to make it ignore frames where rendering is disabled.
I should probably rewrite the logic I added to check the actual tile rather than the palette indexes. As it is, if the palette uses 2 different black indexes, it'll still end up being shown as all black instead of grayscale. Ideally if a tile's final output (e.g as seen on screen) is a single solid color, it should output it using a grayscale palette instead. That should work for your scenario too, I think? At the very least, it will make it impossible for a non-empty tile to ever show up as a single color in the viewer.
Good point about the color indexes in the tile info. At the moment it displays 0/1/2/3 because those are also shortcut keys to switch between them when drawing, but I'm not sure there's any real value to this (it's pretty obvious that the left one will be color 0, etc.). I should probably change this to display the indexes in the bottom right, like it does for all the other tabs?
Sour wrote:
Do you have the RAM init option set to random values? (that's the only way I found to reproduce this) If so, that's most likely why - it picks up the random nametable/palette data on the first frames and uses that as the last known palette for any tile that didn't get used after that. I'll add a check to make it ignore frames where rendering is disabled.
Yes, I do have random RAM. That makes sense. Ignoring while disabled should help, thanks.
Sour wrote:
Ideally if a tile's final output (e.g as seen on screen) is a single solid color, it should output it using a grayscale palette instead. That should work for your scenario too, I think? At the very least, it will make it impossible for a non-empty tile to ever show up as a single color in the viewer.
Yeah, I think that would work.
Though TBH maybe this request is overcomplicating it. Greyscale is probably better for debugging in general, and very useful as-is. Maybe there are times where it's worth knowing that a tile really was all-black last time it was rendered too... so I dunno.
Ignoring while rendering is off is good, though, since it seems to trigger bad recolorations very often during fadeouts in various games.
This build should fix most of the issues:
https://www.mesen.ca/MesenJul28.zip
-Added an option to reset all workspace labels to their default when importing DBG/MLB files (in Workspace->Import Settings, enabled by default)
-The "last known palette" data now ignores bg/sprites palette data if bg/sprites are disabled (fixes the random ram init issues as far as I can tell)
-The color picker in the chr viewer now displays the palette index values instead of 0 to 3.
-There's a new option in the CHR viewer to display single-color tiles in grayscale (and I reverted the change I had done to the last known palette display logic)
-Fixed a couple of minor issues when using 8x16 display mode in the chr viewer
Let me know if you find anything else!
Ah, trying this out and it actually seems to work really well! "Single colour tile" seems like a weird condition, but it does look like it's a practical one.
Interesting test case 1: in the second level of Sunday Funday you can switch the lights on and off. Pretty neat to watch the tiles go grey when the lights switch off.
Interesting test case 2: in my own game Lizard (or its demo) there's a fade between screens, which will fade the current screen's tiles to black (or now grey), sort of like marking them as inactive, lets me see what's being used in the current scene.
Am I correct in seeing that it picks up tile usage colours for the whole nametable space, and not just the onscreen rectangle? It seems to pick up colours for things in the background layer before they come into view. Originally I had assumed it was only doing it for tiles that are used during rendering. (Not sure if either way would be better or worse, just noting the difference.)
Unlike the other CHR viewer options, the default palette setting isn't saved; it always resets to 0 when I restart the program.
I suppose one other thing, since I've been trying a bunch of test versions, it might be good to have a build date in the About menu just so I could have a way of figuring out which build I'm on.
Yea, "Single color tile" isn't exactly clear, but I couldn't really find any better way to describe it.
Your guess is correct - it does use all of the nametable data, rather than what was used during rendering. This is mostly due to not wanting the feature to have too much impact on performance while the debugger is opened (it's a lot easier/faster to check the ~2000 tiles in nametable+sprite ram at the end of the frame, vs adding logic to every single pixel drawn). It does have its downsides though, especially if palette or CHR banks are switched mid frame (in which case the results are wrong for the top half of the split)
I've fixed the palette/highlight type (only for CHR ROM games) dropdowns so that their selection is saved.
The appveyor builds do have a custom build number on each build for this, but they're not optimized with PGO so they are a fair bit (~20-30%) slower than the builds I compile. Still though, having the build date in general is not a bad idea, regardless of whether it's a dev build or an official release, so I've added it to the about window.
There's also a new overlay on the event viewer to show the x,y position (in terms of scanline/cycle) of the mouse cursor (and be able to easily check which events are on the same scanline, etc.). There's no option to disable it at the moment since I don't think there's a need for one, but if you think it gets in the way, let me know and I'll make it optional.
Build:
https://www.mesen.ca/MesenJul29.zip
I'm attempting to use conditional breakpoints.
I've got a label called "b0" which means "byte zero" and is assigned to $00.
If I put [$b0] in a watchpoint, I can see its value as $0f at this particular point in my program.
If I put [$00] in a watchpoint, I would expect to see the same but I see $00.
For a conditional breakpoint I have tried:
[$00]==15
[b0]==15
$00==15
b0==15
And while I can see b0 take on the value $0f in a watchpoint, the breakpoint never triggers. Not sure what I might be doing wrong? The documentation did seem to be pretty clear about using [ ] to access memory, and the debugger does appear to be seeing my label in the watchpoints but not the breakpoint condition.
GradualGames wrote:
And while I can see b0 take on the value $0f in a watchpoint, the breakpoint never triggers. Not sure what I might be doing wrong? The documentation did seem to be pretty clear about using [ ] to access memory, and the debugger does appear to be seeing my label in the watchpoints but not the breakpoint condition.
I suspect what you want is a
write breakpoint on
$00B0 with a condition of
Value == $0F.
If you're using an execution breakpoint, the condition will be tested before the instruction executes, so if you're trying to catch the instruction that writes $0F to $00B0, I think the
[$B0]==15 condition will only catch an instruction that writes to it once it's already $0F. The
Value condition is "value about to be written" for write instructions, or read for read instructions, which is pretty handy in cases like this.
Similarly, you didn't mention, but if it's an
execution breakpoint it does need an address range to be active, probably $8000-FFFF. In this case you're looking for a write, not an execution, I think, but that can be an issue as well.
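The distinction can be sketched roughly like this (hypothetical code, not Mesen's internals): a write breakpoint's Value condition is evaluated against the byte about to be stored, before memory is updated, while [$B0] reads the current, pre-write contents:

```cpp
#include <cstdint>

// Hypothetical sketch of a write breakpoint evaluated *before* the store.
struct WriteEvent {
    uint16_t addr;
    uint8_t value;  // the byte about to be written
};

// "Value == wanted" triggers on the very write that stores the value...
bool valueCondition(const WriteEvent& ev, uint8_t wanted) {
    return ev.value == wanted;
}

// ...whereas "[addr] == wanted" only matches once the value is already there,
// because the condition is checked before memory is updated.
bool memoryCondition(uint16_t addr, const uint8_t* ram, uint8_t wanted) {
    return ram[addr] == wanted;
}
```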
rainwarrior wrote:
GradualGames wrote:
And while I can see b0 take on the value $0f in a watchpoint, the breakpoint never triggers. Not sure what I might be doing wrong? The documentation did seem to be pretty clear about using [ ] to access memory, and the debugger does appear to be seeing my label in the watchpoints but not the breakpoint condition.
I suspect what you want is a
write breakpoint on
$00B0 with a condition of
Value == $0F.
If you're using an execution breakpoint, the condition will be tested before the instruction executes, so if you're trying to catch the instruction that writes $0F to $00B0, I think the
[$B0]==15 condition will only catch an instruction that writes to it once it's already $0F. The
Value condition is "value about to be written" for write instructions, or read for read instructions, which is pretty handy in cases like this.
Similarly, you didn't mention, but if it's an
execution breakpoint it does need an address range to be active, probably $8000-FFFF. In this case you're looking for a write, not an execution, I think, but that can be an issue as well.
My label is called b0; that isn't a hex number in this case, it is actually the name of $00. I have b0, b1, b2, through b19. Could the fact that b0 is also a hexadecimal number be confusing the debugger? I wonder... it didn't confuse the watchpoint!
I am using an execution breakpoint. b0 would already have been assigned the value, prior to this particular routine being called.
GradualGames wrote:
My label is called b0, that isn't a hex number in that case it is actually the name of $00. I have b0,b1,b2, thru b19. Could the fact that b0 is also a hexadecimal number be confusing the debugger? I wonder... it didn't confuse the watchpoint!
Hmm, I don't think this can be a problem for conditions, because a hex number without a $ prefix creates a syntax error. It does seem to correctly pick up known labels, for me.
GradualGames wrote:
I am using an execution breakpoint. b0 would already have been assigned the value, prior to this particular routine being called.
If you typed b0 as the address of a
write breakpoint, it would have used $00B0. The address fields don't accept expressions, just hex numbers, AFAIK.
Does a write breakpoint work like expected if you put in the correct address of b0 though?
Oh! Pardon me! I didn't quite understand what you were reporting, but I am noticing really strange behaviour with that too now that I'm looking at it a little more closely.
At first when I tried creating a b0 label and using it for a conditional execution breakpoint it seemed to work, but I'm getting inconsistent behaviour with it now that I'm trying it after power on.
This is what I'm doing to produce the problem. (I'm trying it on tepples' 240pee.nes, but it should work on any ROM that clears memory at startup, I guess.)
1. create label b0 that points somewhere (I made it go to $35)
2. create execution breakpoint on $8000-FFFF with condition "[b0]==$00"
3. power cycle, and debug... breakpoint is never hit?
4. replace [b0] with its actual address [$35], then power cycle and debug... breakpoint is now hit!
5. replace [$35] with [b0] again, but don't power cycle... resuming in the debugger will keep hitting the breakpoint?
So I apologize, you're right, there is something fishy going on there. (It wasn't failing for me at first, but it is in some cases, like this one.)
GradualGames wrote:
If I put [$b0] in a watchpoint, I can see its value as $0f at this particular point in my program.
If I put [$00] in a watchpoint, I would expect to see the same but I see $00.
Are you sure about this bit? I can't reproduce this part of the problem - [b0] or [$00] always output the same (and correct) value as far as I can tell. (Also you wrote [$b0] instead of [b0], but I'm assuming that's just a typo)
And you just found a bug with conditional breakpoints when using labels in their conditions. It'll parse the condition properly, but internally there's an issue that's causing it to act as if the condition field was empty when evaluating the breakpoints. This is because expressions that use labels are not "cached" by the expression parser. This was done because some labels are dynamic in nature (e.g a label on a specific PRG ROM offset could potentially point to $8050 now, and then be out of scope due to bank switching), which is not something that the expression cache can handle (because it replaces the label with its current numeric value when converting the expression to reverse polish notation)
Getting this to work properly with PRG/CHR ROM labels might be somewhat tricky (especially in terms of performance), but I'll see what I can do. At the very least, I could fix it for CPU/PPU address labels easily (e.g like your b0 label) without any performance degradation, since those labels aren't dynamic.
There might be a bit more to this based on what rainwarrior just posted. On my end I only got the "breakpoint is always hit" behavior so far, but I didn't try the retro steps yet - I'll test that part too in case it ends up being a different bug.
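The caching problem described above can be illustrated with a toy sketch (all names hypothetical): if a label's address is frozen into the cached expression at parse time, a later change to that label's address isn't reflected, whereas resolving it on every evaluation stays correct, at some performance cost:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Toy illustration of the caching problem: a condition like "[b0]==15" that
// is cached with the label replaced by its numeric address at parse time will
// keep using the old address if the label later moves (e.g. due to a PRG bank
// switch for ROM labels).
struct LabelTable {
    std::map<std::string, uint16_t> addr;  // label name -> CPU address
};

// Parse-time substitution: the address is frozen into the cached expression.
uint16_t freezeAddress(const LabelTable& labels, const std::string& name) {
    return labels.addr.at(name);
}

// Eval-time resolution: looked up again on every breakpoint check.
uint8_t evalLabel(const LabelTable& labels, const std::string& name,
                  const uint8_t* mem) {
    return mem[labels.addr.at(name)];
}
```

Fixed CPU/PPU address labels (like b0) don't move, so they can safely be cached; only dynamic ROM-offset labels need per-evaluation resolution.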
Sour wrote:
GradualGames wrote:
If I put [$b0] in a watchpoint, I can see its value as $0f at this particular point in my program.
If I put [$00] in a watchpoint, I would expect to see the same but I see $00.
Are you sure about this bit? I can't reproduce this part of the problem - [b0] or [$00] always output the same (and correct) value as far as I can tell. (Also you wrote [$b0] instead of [b0], but I'm assuming that's just a typo)
And you just found a bug with conditional breakpoints when using labels in their conditions. It'll parse the condition properly, but internally there's an issue that's causing it to act as if the condition field was empty when evaluating the breakpoints. This is because expressions that use labels are not "cached" by the expression parser. This was done because some labels are dynamic in nature (e.g a label on a specific PRG ROM offset could potentially point to $8050 now, and then be out of scope due to bank switching), which is not something that the expression cache can handle (because it replaces the label with its current numeric value when converting the expression to reverse polish notation)
Getting this to work properly with PRG/CHR ROM labels might be somewhat tricky (especially in terms of performance), but I'll see what I can do. At the very least, I could fix it for CPU/PPU address labels easily (e.g like your b0 label) without any performance degradation, since those labels aren't dynamic.
There might be a bit more to this based on what rainwarrior just posted. On my end I only got the "breakpoint is always hit" behavior so far, but I didn't try the retro steps yet - I'll test that part too in case it ends up being a different bug.
Oops, yeah I had a typo with $b0, I meant just b0 because it is a label in my program.
I moved on and was able to find a super gnarly bug in my game by other means of ruling things out and reasoning about my code like usual
Glad I helped find a bug though!
I'm back home this week and I downloaded both Sierra and High Sierra to test your emulator, but... it seems that High Sierra's Disk Utility with an APFS partition is quite... well... awful. I never had issues resizing that disk before, having done it countless times, but now I get errors and it just fails to partition properly.
Once I figure out the cause, I will give you feedback on the subject. I'm starting to get more scared about the next macOS version... they are going from bad to worse.
Sour, I just upgraded to MesenDevWin-0.9.5.164 from .139 and the lines that appeared on the screen about a month ago have vanished! Guess that was fixed in your .159 update about 8 days ago. Thank you so much for continuing your improvements to your already spectacular emulator!!
It has been a huge blessing to me.
I feel bad for all of the people who don't know about your site on AppVeyor. Are you ever going to release Mesen version 0.9.6 so that many others can benefit from the 164+ fantastic changes you've made to it since March 31st, 2018?
Using any label in conditional breakpoints should now work properly (as of the latest appveyor build). There is a slight impact on performance when using labels vs hardcoded values in conditions, but considering it's unlikely you're using a large amount of conditional breakpoints at once, this shouldn't be too much of a concern.
@Banshaku
Let me know how it turns out - hopefully we can manage to get it running properly on your computer using the SDL windows build. If we manage to get that much working, it should hopefully not be too hard to fix wine's performance issues with the debugger.
@unregistered
I'm not too sure what you're referring to when you say "lines on the screen"? I don't recall seeing or fixing anything like that, at the very least.
As for 0.9.6, I was initially hoping to get it done over the weekend, but I might have to push it off a few days further still.
"Lines on the screen" weren't actual visible extra sprites - there was a tic-tac-toe-like grid of "lines" (as if a 1-pixel-wide strip of some background tiles was drawn incorrectly, making a visible "line"). I thought it was because I rearranged some data, but the "lines" vanished after installing .164, and I guessed that might be due to your scrolling fix in .159. Nevertheless, the "lines" have been eliminated!!
Thank you Sour!
edit: From what I remember, the "lines" didn't appear when testing with a powerpack on my NES.
edit2: And all of the text on the screen, from my lua scripts, appears crystal clear now too. You continue to do an excellent service for us.
final edit.
Just loaded our game in .139 and the "lines" aren't present and all of the lua text on the screen is crystal clear. So, obviously, you had nothing to do with the changes that I've experienced. Changes I made before trying .164: I just changed a few outdated comments in the code and added the new mesen into the folder (and renamed the newest mesen I have "mesen.exe" and renamed the .139 mesen back to "MesenDevWin-0.9.5.139.exe"). God blessed our game again!!
Sorry for my bogus report to you Sour. Glad you are going to create 0.9.6.
No worries. FYI, the issue with the Lua text being unclear is usually caused by using blargg's NTSC filter. Due to the way things are internally, I have no simple way of making the text appear properly when using that one particular filter at the moment. I'm guessing that's probably what was causing issues on your end.
I finished installing Sierra/High Sierra on a different partition, and here's the result of my test:
[Sierra]
- Windows version
  - works, but has a screen scaling issue
  - debugger is sluggish
- SDL version (from link in thread)
  - works at normal screen size, but the refresh seems sluggish
  - debugger is sluggish
[High Sierra]
- Windows version
  - works, but has a screen scaling issue
  - debugger seems more sluggish than on Sierra
- SDL version (from link in thread)
  - works at normal screen size, but the refresh seems sluggish
  - debugger seems more sluggish than on Sierra
The good news is that it is working on High Sierra with Wine 3.0.2 and dotnet45, but it is slow. The bad news for me is that I cannot make it work on my main partition, and I don't know why.
I will keep both partitions for now and can test some builds to see if it becomes faster. For now, in its current state, compared to using it on a recent machine with Linux or a 10-year-old PC on Windows, it's too slow to be usable. Only testing different builds will allow us to figure out the performance issues.
This ends my report.
Thanks for taking the time to check all of this. Good to know it actually runs on your new installation, hopefully you can figure out what's causing it to crash on your regular setup.
I can reproduce essentially the same performance issues on Linux when I run the Windows SDL build using Wine, so that at least gives me a relatively decent way to figure out what's causing the slowdowns through trial and error. Is it mostly just the debugger tools that are slow on your end? I seem to recall the regular configuration dialogs working relatively well under Wine.
With the normal version, since the scaling is broken, it's hard to judge whether the speed is normal, but with the SDL version, even though it shows 60 fps, the refresh is not smooth compared to other platforms. If I can get the Windows version to work properly, then I could confirm whether the same problem occurs.
I remember you mentioned a parameter to start the emulator with, but I don't think it was for fixing that issue, was it? I can't find it back in this thread (or was it in another thread?).
Looking back at the code for the windows SDL version, I had set it to use software rendering because hardware rendering refused to work, but that might have been specific to my macOS VM. I'll have to try setting it to hardware rendering again to see. Although arguably it might be easier to try to fix the aspect ratio under Wine with the DX11 build (since it would be much easier for me in terms of code/maintenance)
I just don't know why it does that on Wine either. I think on my Linux PC I used Mono with the Windows version and everything was fine. I was not able to run Mono properly on the Mac, but I may try again - the performance was better on Linux, so hopefully I'd see the same thing on the Mac performance-wise.
I just released 0.9.6 earlier today. There are over 300 commits in this release, which probably makes it the largest release since 0.1.0 in terms of commits (so it's a bit hard to give a full list of everything that's been done). Debugger-wise, there have been a lot of small additions/improvements/bugfixes across the debugging tools. Linux-wise, this build should run the emulation ~20-30% faster than the official 0.9.5 release thanks to PGO, and a lot of debugger UI issues have been fixed, too.
Also, I've recently discovered that Mono has a tool to produce a native Linux binary from a .NET executable and it actually seems to work properly as far as Mesen goes. This means it would be possible to release a Linux-only 64-bit build that's only about 10-15mb in size and does not require Mono to be installed at all (its only dependency would be SDL2), which would make using Mesen easier for first time users (and would avoid forcing people to download the entire mono package)
Banshaku wrote:
I think on my linux pc I used mono with with the windows version and everything was fine
The official build actually contains both windows & linux binaries - so running it on Mono on Linux will run the native Linux SDL build, rather than the DX11 build. The scaling issue is most likely a DX11 issue w/ Wine, but maybe I can find a workaround for it.
Mono Winforms on macOS is horribly broken as far as I can tell, so it's probably not worth the effort trying to get it to work on mono on macOS at the moment (until winforms support is fixed on macOS, if ever)
I see. I will try the new build on linux and see how fast it is.
As for Mono on Mac - yep, when you start an app it warns about the Carbon version, 32-bit support, etc., so I tried on both Sierra versions and it fails in both cases. I'm only using macOS because of bash and to back up my phone, so I'm getting less and less interested in the OS now that I'm not using it for work. I tried removing Homebrew, zapping the Wine folder, and reinstalling everything related, and it still doesn't work. I'm not in the mood to reinstall the OS for now, so I think I will just keep an extra partition to give feedback on how the latest version works on Mac.
I could install Linux on another partition, but the keyboard layout is a pain and some parts don't work (the camera didn't years ago; maybe it works now), so unless I always use an external keyboard, I don't know how comfortable it would be. But this is my personal problem, not really Mesen related.
It's butter smooth on an old Core 2 Duo, so I may start to use Windows with the Linux subsystem for now.
Crash report:
Open N163TEST.NES from this post:
viewtopic.php?p=222711#p222711
After it starts running, go to the Debug menu and open the Debugger. It will crash immediately.
A screenshot of the crash window is attached. After clicking OK, Mesen is unresponsive and has to be force closed.
Thanks - this is because the NES 2.0 header gives it 128 bytes of work RAM, and Mesen only has 256-byte granularity on these sorts of things. This ends up mapping null pointers to a section of RAM and causes a crash in the debugger.
After changing the header to say no work ram (I thought a discussion a long time ago had concluded that the internal RAM for this mapper shouldn't be counted in the header? Unsure.), it boots, but then the ROM assumes that code will be available at $8000 without initializing the PRG banking register for it, so Mesen leaves that portion as open bus and doesn't boot properly. I guess I could change this to select random banks instead of open bus, but unless that test contains the same 8 KB bank copied 4 times over, it still won't work reliably.
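The null-pointer mapping can be sketched like this (hypothetical Python, not Mesen's code): a mapper working in 256-byte pages has no backing memory to assign for a 128-byte region, leaving a null page entry that the debugger later dereferences.

```python
PAGE = 0x100  # 256-byte mapping granularity

def build_pages(work_ram_size):
    """Build a page table for a work RAM region of the given size."""
    pages = []
    full_pages, remainder = divmod(work_ram_size, PAGE)
    for _ in range(full_pages):
        pages.append(bytearray(PAGE))  # normal page: backed by real memory
    if remainder:
        pages.append(None)  # sub-page region: nothing mapped -> "null pointer"
    return pages

pages = build_pages(128)   # NES 2.0 header declared 128 bytes of work RAM
assert pages == [None]     # the debugger reading through this entry crashes
```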
Sour wrote:
it boots, but then the ROM assumes that code will be available at $8000 without initializing the PRG banking register for it, so Mesen leaves that portion as open bus and doesn't boot properly. I guess I could change this to select random banks instead of open bus
How do you boot games using mapper 66 (GNROM), 11 (Color Dreams), 34 (BNROM), or 7 (AOROM)?
Those will select a random bank and boot from there - but that makes sense for those mappers. I guess you could argue that someone using the Namco 163 could set the reset vector to $8000 and copy the reset code to all banks (e.g. like this ROM most likely does), but in practice this potentially wastes a lot of PRG space, so I imagine nobody ever did this back then.
I usually kept these kinds of scenarios as open bus where possible to avoid homebrew devs forgetting to initialize the banking registers (or relying on a power on state that doesn't exist on hardware), but I guess I could just make them random instead.
Sour wrote:
I usually kept these kinds of scenarios as open bus where possible to avoid homebrew devs forgetting to initialize the banking registers (or relying on a power on state that doesn't exist on hardware), but I guess I could just make them random instead.
I thought it was already randomized if you used the Emulation > Advanced > "Randomize power-on state for mappers" option?
rainwarrior wrote:
I thought it was already randomized if you used the Emulation > Advanced > "Randomize power-on state for mappers" option?
The current implementation only does it for some (~20) mappers - but now that I think about it, I could easily randomize all the banks that are currently left as open bus (whether that option is enabled or not). That would likely cover the vast majority of mappers (at least those likely to be used in homebrew dev or rom hacks), would technically be more accurate than leaving it as open bus, and would allow cases like this test rom to work properly.
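A sketch of that idea (hypothetical names, not Mesen's API): PRG slots that would otherwise be left as open bus get a random bank at power-on, which is arguably closer to hardware's undefined state.

```python
import random

def power_on_prg_slots(bank_count, slot_count=4, randomize_unmapped=True):
    """Initialize PRG slots at power-on.

    randomize_unmapped=True picks a random bank for each slot;
    False leaves them unmapped (None = open bus until software
    writes the banking register).
    """
    if randomize_unmapped:
        return [random.randrange(bank_count) for _ in range(slot_count)]
    return [None] * slot_count

random.seed(0)  # deterministic for this demo only
slots = power_on_prg_slots(16)
assert len(slots) == 4 and all(0 <= b < 16 for b in slots)
assert power_on_prg_slots(16, randomize_unmapped=False) == [None] * 4
```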
Sour wrote:
After changing the header to say no work ram (I thought a discussion a long time ago had concluded that the internal RAM for this mapper shouldn't be counted in the header? Unsure.)
No, the internal work RAM should be specified, and must be specified for games that battery-back only the internal RAM and use it for save-game purposes, such as Mindseeker. The only time the internal work RAM is not specified is when there is both internal work RAM and 8 KiB of WRAM and both or neither are battery-backed, to prevent the non-power-of-two size from having to round up. For all these games, such as Megami Tensei 2, it's also possible to denote the 128 bytes of work RAM as non-battery-backed (since the games will not use it for save-game data but for sound, or not at all, which I verified with every single game), and the 8 KiB of WRAM as battery-backed. And you need to specify the 128 bytes of battery-backed EEPROM in mapper 159 as well.
I have modified the N163TEST rom to run from $E000. Mesen's N163 emulation does not seem to emulate the situation in which only one channel is enabled accurately, though, as I cannot hear anything in that situation.
I've been getting quite a few crashes on Linux/mono ever since upgrading, to the point where it's pretty unstable. Attached is a sample crash report (this time from me pressing Ctrl-T to reload the game, but I haven't seen a reliable pattern for when the crashes occur).
This is with the 0.9.6 github build.
Thanks!
NewRisingSun wrote:
I have modified the N163TEST rom to run from $E000. Mesen's N163 emulation does not seem to emulate the situation in which only one channel is enabled accurately, though, as I cannot hear anything in that situation.
Thanks! The sound output was in fact disabled when only 1 channel was enabled - this should be fixed. The test rom appears to work properly now, but I haven't compared with other emulators. I also fixed the crash that the 128 bytes work ram header was causing.
gauauu wrote:
I've been getting quite a few crashes on Linux/mono ever since upgrading
That crash seems to imply that this is caused by Mono crashing while trying to process the regular expressions I use to read the .DBG files, which is very odd. Most of the other threads in the process seem to be paused, so it seems unlikely that they would be causing issues, either. Can you try disabling the "auto-load .DBG files" option to see if the crashes stop?
I just tried getting it to crash on Ubuntu 18 by power cycling on a ROM with a .DBG file but couldn't. I'm assuming you haven't upgraded mono, either?
Sour wrote:
Can you try disabling the "auto-load .DBG files" option to see if the crashes stop?
Yeah, it hasn't crashed since I disabled that.
Quote:
I'm assuming you haven't upgraded mono, either?
Not that I know of, although I blindly click "update all" when the package manager says there's updates, so....it's possible that I have without noticing.
What version of mono are you running? (I have 4.6.2 on my Ubuntu 18 VM)
How easy is it to get it to crash? e.g if you just keep power cycling the rom with the debugger opened, does it eventually crash?
From what I read, this particular crash seems to be caused by Mono's garbage collector crashing for an unknown reason. It's possible that Mesen's C++ code is somehow corrupting memory and causing Mono to crash, though that's somewhat unlikely.
Sour wrote:
What version of mono are you running? (I have 4.6.2 on my Ubuntu 18 VM)
Mono JIT compiler version 4.2.1 (Debian 4.2.1.102+dfsg2-7ubuntu4)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors.
http://www.mono-project.com TLS: __thread
SIGSEGV: altstack
Notifications: epoll
Architecture: amd64
Disabled: none
Misc: softdebug
LLVM: supported, not enabled.
GC: sgen
Quote:
How easy is it to get it to crash? e.g if you just keep power cycling the rom with the debugger opened, does it eventually crash?
Yesterday it was crashing every 5th or 6th time I power cycled, and occasionally while doing other things. Today (even with auto-loading of .dbg files re-enabled) it hasn't crashed while I've done silly things like spamming the power cycle button. I'll leave the option turned back on and see what happens.
Today I was just clicking around in the CHR tab of the PPU viewer when it crashed. Dump attached. If you'd rather I put these on github, just let me know.
Thanks!
Sorry gauauu for interrupting your conversation with Sour.
Sour, after installing 0.9.6 the last two trace log files I've made today are extensionless. Why? The extension was changed from .log to .txt and that was OK, but now, when opening new trace log files, I have to select Notepad from a list, resize Notepad's window, and change the font. Either that or I must type ".txt" at the end of every new trace log filename, I guess; I haven't tried that yet. So I was wondering, would it be terrible to add a blank entry, a .txt, and a .log to the empty "Save as type:" menu that's underneath the "File name:" field? It seems to me that Windows 10 can remember "Save as type:" choices.
I'm asking because I don't understand anything about creating programs that run on many different system types.
gauauu wrote:
Today I was just clicking around in the CHR tab of the PPU viewer when it crashed.
This looks like it's essentially the same problem (Mono crashing while trying to allocate managed objects). Could you try running an older 0.9.5 dev build (e.g one you were using before, or one from the appveyor builds) to see if the crashes really don't happen with it?
Or maybe it only happens when you have specific tools open? e.g. this crash implies you had the PPU viewer opened.
unregistered wrote:
Sour, after installing 0.9.6 the last two trace log files I've made today are extensionless.
I think this is probably because you manually changed the output file's name? It won't add the .txt suffix automatically because the "file type" dropdown is empty. As far as I can tell though, it's been this way ever since I changed it from .log to .txt and let users choose the filename, back in March 2017. It's essentially caused by a typo in the code - the latest commit fixes it.
Thank you so much Sour!!
I guess I must have done something wrong like you said. The latest commit does add the .txt to the "Save as type:" menu box!
Now I know that a "commit" is your AppVeyor release.
edit: note: on the commit history page, click on the commit you want, then click the "ARTIFACTS" button to get a download link. (The latest commit is built from the previous successful commit, I think.) Feel free to delete this edit if you want to.
Sour wrote:
Could you try running an older 0.9.5 dev build (e.g one you were using before, or one from the appveyor builds) to see if the crashes really don't happen with it?
I've swapped back to the old build and no crashes yet. I'll let you know if that changes.
I tested Super Mario Bros with Mesen, ran the character at max speed, and observed 3-4 screen glitches just in the first level, 1-1.
I tested with both the normal window and a 2x-scale window. My PC is Virtual Reality Ready. It seems to me that there are frame drops here and there.
Is this Mesen's normal behavior?
I'm not too sure what you mean by screen glitches. If you just mean that some frames are skipped, then yes, that's possible, especially if you don't have vertical sync turned on.
Edit: Also, since the NTSC NES runs at 60.1fps, turning vsync on will still result in a dropped frame every 10 or so seconds under ideal conditions.
Maybe you want to emulate the console at a master clock of (341*262-0.5)*4*60 = 21441960 Hz instead of 39375000*6/11 Hz?
There is an option for this already (in the video options), it will run at exactly 60.0/50.0fps when turned on, rather than the standard ntsc/pal frame rates.
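The frame-rate figures in the last few posts check out, assuming the usual NESdev values (NTSC master clock = 39375000*6/11 Hz, one PPU dot per 4 master clocks, 341x262 dots per frame minus the half dot skipped on alternating frames):

```python
master = 39375000 * 6 / 11          # ~21.477272 MHz NTSC master clock
dots_per_frame = 341 * 262 - 0.5    # 89341.5 PPU dots per frame, on average
fps = master / 4 / dots_per_frame   # PPU dot clock divided by dots per frame
assert abs(fps - 60.0988) < 0.001   # the ~60.1 fps mentioned above

# Master clock that would give exactly 60.0 fps (the integer FPS option):
exact60 = dots_per_frame * 4 * 60
assert exact60 == 21441960
```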
Sour wrote:
There is an option for this already (in the video options), it will run at exactly 60.0/50.0fps when turned on, rather than the standard ntsc/pal frame rates.
Dude, you deserve a [emoticon] for that option.
After enabling both 60fps and vsync, the glitches do disappear.
Thank you!
I tried using Mesen to debug an NSF and ran into a few problems. I realize NSF debugging probably isn't a priority (probably very few people do this) but here's my report if you want to take a look:
1. Execution starts in some internal code at $3F00. This puts an added burden of trying to understand this internal code from disassembly (would be better if it starts by breaking at INIT, or during PLAY... ideally the internal code would be a hidden implementation detail.)
2. Parts of the address space aren't viewable in disassembly and don't seem to show up properly in debugger. E.g. near the end of the internal code is JMP ($3E00) but $3E00 resolves to 0. Trying to look at $3E00 in the disassembly shows a gap in the address space there.
3. Banks in the memory display at the bottom seem to perpetually say N/A.
4. I imported a dbg file but all of the labels were $70 higher than they should have been? (I don't know if it's a coincidence that an NSF header has $80 bytes and an NES header has $10?)
5. I had some stuff in a segment with a load address different from its run address (i.e. copy to RAM before running). The address for this stuff was transposed from ~$400 to ~$8070. (That might be its load address +$70 but only the run address is really useful.)
6. Variables directly in RAM seem to be placed correctly.
7. I can't select a range of labels and delete them in the label panel. (Delete is disabled when a range is selected?)
8. Being able to drag a .dbg file onto the debugger might be a nice feature to have.
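The $70 offset in item 4 is consistent with the header-size guess: $80 (NSF header) minus $10 (iNES header) is exactly $70. A quick sketch of hypothetical importer logic (the function and offsets here are illustrative, not Mesen's code):

```python
NES_HEADER = 0x10   # iNES/NES 2.0 header size
NSF_HEADER = 0x80   # NSF header size
assert NSF_HEADER - NES_HEADER == 0x70

def file_to_cpu_addr(file_offset, load_addr, header_size):
    """Map a file offset inside the PRG data to a CPU address."""
    return load_addr + (file_offset - header_size)

# If an importer assumes iNES layout (PRG starts at file offset $10),
# every label in an NSF lands exactly $70 too high:
off = NSF_HEADER + 0x123  # some byte inside an NSF's PRG data
wrong = file_to_cpu_addr(off, 0x8000, NES_HEADER)
right = file_to_cpu_addr(off, 0x8000, NSF_HEADER)
assert wrong - right == 0x70
```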
I've attached the NSF and its DBG file that exhibit this behaviour. The source code for this is here, if needed:
github revision (building it is a bit of a process, but nsf.s, ramp.s, and nsf.cfg have the relevant info)
Thanks for the feedback!
The latest appveyor build should fix all of these except 1 & 2 (which are a bit trickier to fix). I think the simplest solution for those would probably be to add "Break on Init" and "Break on Play" options and disable the "Break on power on / reset" option for NSF files? This would make the code at $3F00 more or less invisible, except for the infinite loop between IRQs. I'll take a look when I have some more time.
I've done some rework of the way the current bank was kept internally which means the CPU/PPU memory mappings should now properly work for the vast majority of mappers (only exceptions being mappers that have special nametables or that map CHR rom/ram to $2000+). The PPU mappings also show different colors for CHR ROM vs RAM now (green vs blue).
Let me know if you have any other issues debugging NSF files.
Sour wrote:
The latest appveyor build should fix all of these except 1 & 2 (which are a bit trickier to fix). I think the simplest solution for those would probably be to add "Break on Init" and "Break on Play" options and disable the "Break on power on / reset" option for NSF files? This would make the code at $3F00 more or less invisible, except for the infinite loop between IRQs. I'll take a look when I have some more time.
Break on init/play would do the job pretty well, I think.
Sour wrote:
Let me know if you have any other issues debugging NSF files.
The labels look perfect for the NSF now. Drag-and-drop DBG is nice too.
However... NSFe files seem to have broken label addresses. I probably should have checked this too. (The RAM code labels appear fine; it's only the ones that point at ROM code?) Given that the issue with NSF seemed coincidentally related to the header size, maybe this is related to how NSFe can put its DATA chunk at a variable location in the file? (I know NSFe debugging is an even more obscure need - I think Mesen is actually the only debugging emulator that even plays NSFe.)
Example attached (basically the same thing in NSFe form; source).
Yep - same issue for NSFe files (the position at which the PRG starts in the file has an impact on the import logic).
It should be fixed now - .nes/.nsf/.nsfe files should all work properly. UNIF won't work though, and since UNIF files can be split into multiple PRG chunks, it would be harder to add support for it (and I doubt anybody making cc65/ca65 projects is building UNIF files anyway?). FDS files won't work either, for obvious reasons.
I'll try to look into adding break on play/init next weekend.
Unrelated, but something I had noticed a long time ago, forgotten about, and just noticed again when loading your NSFe file - wouldn't it make sense for the NSFe spec to either specify the encoding for all strings (e.g in a separate chunk), or state in the spec that NSFe files (and ideally NSF too) should all use UTF-8? As is, there is no real way to guarantee the track names/artist/etc are displayed properly (NSF files have the same problem with the artist/etc fields). e.g in your NSFe, you encoded the é in Prélude as it would be in CP-1252 (it also seems to show up as "Pr|lude" somewhere else in the file). Mesen interprets all the strings as UTF-8, which makes the Prélude display incorrectly in this case.
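The ambiguity can be shown with the Prélude field itself. A strict-UTF-8-first fallback (illustrative Python, not Mesen's code) cleanly separates CP-1252 from UTF-8, though Shift-JIS would need real heuristics, since many CP-1252 byte sequences are also valid MS932:

```python
def decode_legacy_field(raw: bytes) -> str:
    """Decode an NSF/NSFe string field of unknown encoding."""
    try:
        return raw.decode("utf-8")  # what a spec ought to mandate
    except UnicodeDecodeError:
        # Legacy fallback: CP-1252, the common Western default.
        return raw.decode("cp1252", errors="replace")

# CP-1252 'é' (0xE9) is an invalid UTF-8 sequence, so the fallback kicks in:
assert decode_legacy_field(b"Pr\xe9lude") == "Prélude"
# A proper UTF-8 encoding decodes on the first try:
assert decode_legacy_field("Prélude".encode("utf-8")) == "Prélude"
```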
Sour wrote:
Unrelated, but something I had noticed a long time ago, forgotten about, and just noticed again when loading your NSFe file - wouldn't it make sense for the NSFe spec to either specify the encoding for all strings (e.g in a separate chunk), or state in the spec that NSFe files (and ideally NSF too) should all use UTF-8? As is, there is no real way to guarantee the track names/artist/etc are displayed properly (NSF files have the same problem with the artist/etc fields). e.g in your NSFe, you encoded the é in Prélude as it would be in CP-1252 (it also seems to show up as "Pr|lude" somewhere else in the file). Mesen interprets all the strings as UTF-8, which makes the Prélude display incorrectly in this case.
The é in that case wasn't any deliberate attempt at encoding it that way, it's just how that string was generated by default with the tools I was using. The | elsewhere is unrelated (that's intentional, intended for another purpose not used by the NSF).
It's kind of hard to specify a standard for text in NSF. It was never initially specified but CP-1252 is common. Shift-JIS is even more common, I think. Most are plain ASCII though. (Maybe an upcoming NSF2 spec could finally standardize on UTF-8?)
NSFe I think we could get away with retroactively specifying UTF-8, but only because it's a very underused format. I haven't really encountered any collection of NSFe except the one Disch made when he created the format.
Most NSF players just display in local ANSI, I guess. I think you'd be hard pressed to find any that support unicode, actually. NSF is a thing that long predates the big switch to unicode. NSFPlay traditionally displayed in whatever the local ANSI is. If you run it on windows in Japanese mode, it'll display Shift-JIS.
rainwarrior wrote:
It's kind of hard to specify a standard for text in NSF. It was never initially specified but CP-1252 is common. Shift-JIS is even more common, I think. Most are plain ASCII though. (Maybe an upcoming NSF2 spec could finally standardize on UTF-8?)
Any new spec should definitely use UTF8, anything else makes it harder to manage these days, for essentially no benefit. Even zip files are plagued with the same issue, the file names have no standard encoding. :\
Looking at NSF2's spec, it does already say that strings should be in UTF8:
Quote:
All strings are UTF-8; players should indicate an error for characters that they cannot display.
Although it doesn't explicitly say that the name/artist/copyright fields should also be UTF8.
I guess NSF2's only benefit vs NSFe was that it was backward-compatible with NSF? I imagine this was mostly for the sake of backward compatibility with hardware players that cannot be updated or are no longer maintained. (Otherwise I'd argue it'd be simpler to expand NSFe)
NSF2 is an unimplemented "speculative" spec. There are no publicly available rips in this format, nor publicly available devices or emulators that use it*. Nothing in there is established yet. I wouldn't recommend trying to implement it.
Though, to be honest, NSFe can easily be extended to add everything wanted for NSF2, so I'm not sure we even need it. If you want a backward compatible format we already have it. Anything that would need NSF2's real features isn't going to be backward compatible anyway.
My plan is to implement something like the NSF2's proposed features for NSFPlay (the version on the wiki is a bit out of date with subsequent discussions about it). I'll enable them with a new NSFe chunk. Maybe I could also harden up and release a revised "NSF2" but I don't really know if there's any necessity to make use of that idea.
(* There is one rip of Battletoads and some corresponding partial test implementation to support it in Nintendulator.)
rainwarrior wrote:
Most NSF players just display in local ANSI, I guess. I think you'd be hard pressed to find any that support unicode, actually.
Most linux-y things will print unknown text using UTF-8 nowadays.
rainwarrior wrote:
Shift-JIS is even more common, I think. [...] If you run it on windows in Japanese mode, it'll display Shift-JIS.
Specifically the "MS932" variant.
One more thing about NSFs: if using PAL region the INIT should have 1 passed for X, not 0.
Sour wrote:
FDS files won't work either, for obvious reasons.
I understand that banking code in and out would be complicated, since the regions FDS files are loaded to are arbitrary (though at least they're specified in the file itself), but for the moment if I were to create an FDS that does not need overlapping symbols, could this not work just as code from RAM already does?
lidnariq wrote:
Specifically the "MS932" variant.
Thanks. This will be helpful when I try to write code to automatically detect it in old NSFs.
rainwarrior wrote:
NSF2 is an unimplemented "speculative" spec.
One of us should ping the IRC channel to see if kevtris or anyone else has been making use of it.
rainwarrior wrote:
to be honest, NSFe can easily be extended to add everything wanted for NSF2
Including playback on things like a PowerPak or similar, with the same NES handling the loading, playback timing control and track selection, and actual execution of the audio driver? I think that's why kev proposed NSF 2 instead of NSFe, as it'd be a pain to decode the chunked format in 6502 assembly.
tepples wrote:
rainwarrior wrote:
to be honest, NSFe can easily be extended to add everything wanted for NSF2
Including playback on things like a PowerPak or similar, with the same NES handling the loading, playback timing control and track selection, and actual execution of the audio driver? I think that's why kev proposed NSF 2 instead of NSFe, as it'd be a pain to decode the chunked format in 6502 assembly.
1. It's not a pain to decode the chunked format in 6502 assembly. This is a false premise. (...also, the proposed NSF2 is a chunked format for all data past the main ROM.)
2. The PowerPak is not going to magically grow the other NSF2 features just because it's backwards compatible.
It'd actually be relatively easy to add NSFe support to the PowerPak. It could use the same player with a different loader. The reason it doesn't already support it is just that it's a minority format. (...as opposed to NSF2 which is a non-existent format.)
Ahh, I'm sorry we're having this pointless digression in your Mesen thread, Sour.
rainwarrior wrote:
Sour wrote:
FDS files won't work either, for obvious reasons.
I understand that banking code in and out would be complicated, since the regions FDS files are loaded to are arbitrary (though at least they're specified in the file itself), but for the moment if I were to create an FDS that does not need overlapping symbols, could this not work just as code from RAM already does?
Here's an example to try this out. This is a build of my minimal ca65 example for FDS with dbg and source.
Mesen appears to correctly load the symbols with the right addresses. The addresses for RAM < $800 appear inline perfectly. Labels for $6000+ seem to have been loaded with the correct values (listed as RAM in the address column too), but they are never shown in the disassembly?
rainwarrior wrote:
Ahh, I'm sorry we're having this pointless digression in your Mesen thread, Sour.
No worries, not the first time it happens! :p
rainwarrior wrote:
Labels for $6000+ seem to have been loaded with the correct values (listed as RAM in the address column too), but they are never shown in the disassembly?
Labels for "NES RAM" are actually meant to be limited to the 2 KB of internal RAM and are only used if a value below $2000 is specified (I've added bounds checking to the edit window so values outside this range can no longer be entered manually).
For the FDS, the issue is that the information in the .dbg file isn't really very precise when it comes to the contents of work/save RAM (especially if bankswitching is involved). That being said, I've added some code that should be able to get work/save ram labels to work properly in typical cases (no bankswitching, etc.). For FDS, it should also work correctly, so long as you're not reusing the same parts of work ram in different files (your example seems to work fine).
I also added the Break on Init/Play options for NSF files.
rainwarrior wrote:
One more thing about NSFs: if using PAL region the INIT should have 1 passed for X, not 0.
It looks like I broke this a long time ago when fixing the playback speed when Dendy timings are selected. Haven't fixed it just yet since I was wondering, what should this be for Dendy timings? Would returning 1 like for PAL be good enough? Or would it make more sense to have another value for Dendy (e.g. X = 2)?
Sour wrote:
rainwarrior wrote:
One more thing about NSFs: if using PAL region the INIT should have 1 passed for X, not 0.
It looks like I broke this a long time ago when fixing the playback speed when Dendy timings are selected. Haven't fixed it just yet since I was wondering, what should this be for Dendy timings? Would returning 1 like for PAL be good enough? Or would it make more sense to have another value for Dendy (e.g. X = 2)?
NSF has never had a Dendy spec. 2 is "undefined behaviour"... however...
I did actually decide yesterday that I should add a new NSFe chunk to address this:
http://wiki.nesdev.com/w/index.php/NSFe#regn
If unspecified, I think "normal" expected Dendy behaviour is X=0 but using PAL speed (i.e. the NTSC version of tunes played too slowly). That's really what most Dendy games would do.
(Most NSF rips are single-platform only and ignore X as a parameter anyway. With homebrew, especially FamiTracker, X does often matter, though, and there an unexpected X=2 may cause problems. Better to default to X=0 there than to throw something new at rips that weren't written for it.)
The added NSFe chunk will accommodate stuff made specifically for Dendy, though, in which case, if that's specified, pass X=2.
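The convention being discussed could be summed up in a few lines. This is a sketch of the proposal, not an implemented spec; the `nsfe_declares_dendy` flag stands in for the hypothetical region chunk:

Code:
```python
# Sketch of the NSF INIT X-register convention discussed above:
# NTSC = 0, PAL = 1, and Dendy = 2 only when the NSFe region chunk
# explicitly declares Dendy support; otherwise fall back to X = 0
# (with PAL playback speed). Illustrative only.

def init_x_for_region(region, nsfe_declares_dendy=False):
    if region == "ntsc":
        return 0
    if region == "pal":
        return 1
    if region == "dendy":
        # An unexpected X=2 could confuse rips that only know
        # NTSC/PAL, so only use it when the tune explicitly opts in.
        return 2 if nsfe_declares_dendy else 0
    raise ValueError(region)
```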
Another bug report: I get a division by zero error when I open the debugger for this one.
Edit: Ah! The problem is I have no CHR-ROM but used an iNES 2 header to specify 0 CHR-RAM. So, I made a bad header, but it does crash the emulator too. The ROM itself doesn't do anything visible at the moment (it should just display a blank screen), but the source code is here if useful:
github link (once again, it's a bit convoluted to build, so not recommended unless needed - ask me if you need more intermediate files from it)
Attachment:
mesen_divzero.png [ 27.94 KiB | Viewed 4093 times ]
As far as I can tell this error doesn't have to do with the DBG file? It seems to have this problem even if I clear the workspace and get rid of the DBG.
Also, to follow up on the previous changes (since I'm trying the latest appveyor build): the FDS example seems like it's working very well! NSFe also appears to work. The INIT and PLAY gotos are appreciated.
What can I do as a user to improve Mesen's performance? My computer is an Asus UX32VD, i5 version, but Mesen barely uses my CPU (there's free CPU time and it's by no means hitting 100%). The problem I'm facing is that the emulator randomly drops to the 40-58 FPS range (most of the time it's 55fps), which leads to audio screeching. I've tried increasing the audio buffer, but it had no effect. This happens even when I'm using no filters whatsoever.
Important fact:
The Asus UX32VD is a laptop. It also contains two GPUs: an Intel HD 4000 (i.e. an IGP, graphics integrated into the CPU) and an NVIDIA GT 620M. I don't know which of those GPUs is actually handling the graphical drawing, but it likely matters. Neither GPU is particularly "powerful" compared to a desktop's, which brings into question what DX11 features are being used by Mesen (I have to assume you're using Windows 7 Home, which is the default OS the laptop comes with).
A suggestion: the current PC should always be assumed to be the start of an instruction for disassembly.
Here's a screenshot where a data table appearing right before a function breaks the disassembly of that function:
Attachment:
debugger_pc_not_disassembled.png [ 101.08 KiB | Viewed 4044 times ]
So, I guess the "correct" display in my view would break up the previous instruction (and show its disassembly as data), and begin a new instruction where the PC is.
There's also a defined symbol on the line at that PC too, which is being hidden because of this. (I might even suggest as an option that all defined debug labels should cause the disassembly to realign on that point.)
Though, being able to realign the disassembly manually is very useful on its own, e.g. if I want to go browsing other areas of the code (especially when looking at a game I don't have debug symbols for). In FCEUX you can manually set the starting address of the top of the disassembly window, and the current PC is always placed there as well, which kills two birds with one stone, but it's a bit of a simpler implementation (seeing code above and below in Mesen is definitely a good feature).
The code in question looks like this:
Code:
sprite_title_cursor: .byte 0, -1, $0C, $00
.byte 7*8, -1, $0C, $40, 128
title_redraw:
jsr sprite_begin
lda title_pos
asl
...
koitsu wrote:
Important fact:
Asus UX32VD is a laptop. It also contains two GPUs: Intel HD 4000 (i.e. IGP / integrated graphics into the CPU), and an NVIDIA GT 620M. I don't know which of those GPUs is actually handling the graphical drawing, but it likely matters. Neither GPU is particularly "powerful" compared to a desktop, but that brings into question what DX11 features are being used by Mesen (I have to assume you're using Windows 7 Home, which is the default OS that the laptop comes with).
I'm on Win10. And the GPU doesn't matter for an NES emu. I'd rather worry about other settings related to the emulation itself. I have no such issues with other emulators such as FCEUX or Nestopia. Again, Mesen barely uses my CPU/GPU, and I've disabled the integrated GPU, so it uses the NVIDIA chip.
//edit: I haven't looked into the sources yet, but if the debugger is active even when you don't open it (i.e. when just playing games and not using the debug functions), that may be a slowdown factor.
darkhog wrote:
My computer is an Asus UX32VD, i5 version, but Mesen barely uses my CPU (there's free CPU time and it's by no means hitting 100%)
If you set the emulation to "maximum speed", how fast can it go? If it can go above 60fps at maximum speed but drops below 60fps when set to 100% speed, something is wrong.
Are you running off the battery, or plugged in? Is Windows set to use one of the "low battery usage" performance profiles (e.g. Balanced), or one of the high-performance ones?
Like you guessed, the video card doesn't really matter, and Mesen doesn't make much use of DX11 features; DX11 is required mostly because I use Microsoft's DirectX Tool Kit library to simplify a few things.
The debugger is disabled unless one or more of the debugging tools are opened (opening any of them will typically drop performance by 20-40%).
rainwarrior wrote:
Ah! The problem is I have no CHR-ROM but used iNES 2 header to specify 0 CHR-RAM. So, I made a bad header, but it does crash the emulator too.
Thanks, seems to be linked to the changes I did to improve the CPU/PPU memory mapping display at the bottom, I'll take a look.
For the disassembly issue, this shouldn't ever happen unless the CDL log is invalid (e.g. parts that are not code have been marked as code). I'm assuming this is with a .dbg file loaded? I'm using the data from .dbg files to generate proper CDL data when they are loaded (which in turn means all the code is disassembled properly from the start), but it's possible that in this case the logic I'm using is flawed. If you can send me the .dbg file, I might be able to fix it.
Realigning the disassembly would be a nice thing to have, but I can't see a simple way of doing this at the moment, given the way the disassembly works. Essentially, the disassembly relies on the CDL info entirely, and the entire memory space is disassembled on every step. Using labels to force the disassembler to realign itself is definitely something worth exploring though.
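For illustration, the "realign on labels" idea could look something like this toy sketch (Python, with a made-up three-entry opcode-length table; nothing like Mesen's real disassembler):

Code:
```python
# Toy sketch of CDL-driven disassembly: bytes flagged as code are
# grouped into instructions using a tiny (hypothetical) opcode-length
# table, and any address carrying a label forces the disassembler to
# start a new instruction there instead of being swallowed mid-opcode.

OPCODE_LEN = {0xA9: 2, 0x8D: 3, 0x60: 1}   # LDA #imm, STA abs, RTS

def disassemble(mem, is_code, labels):
    lines, pc = [], 0
    while pc < len(mem):
        if not is_code[pc]:
            lines.append((pc, "data", 1))
            pc += 1
            continue
        size = OPCODE_LEN.get(mem[pc], 1)
        # realign: never let an instruction cover a labeled address
        for off in range(1, size):
            if pc + off in labels:
                size = off
                break
        lines.append((pc, "code", size))
        pc += size
    return lines
```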
It goes to about 200fps on maximum speed. I'm plugged in (using the computer as a workstation rather than a laptop as I can't afford a real pc at the moment) and not in power saving mode.
//edit: Do you plan adding support for the .deb files (debug symbols used by fceux)?
darkhog wrote:
It goes to about 200fps on maximum speed. I'm plugged in (using the computer as a workstation rather than a laptop as I can't afford a real pc at the moment) and not in power saving mode.
//edit: Do you plan adding support for the .deb files (debug symbols used by fceux)?
I think I may know what's causing it to run slower than 60fps at times. Essentially if your PC wakes up too late from the sleep calls used to regulate the speed, it never tries to make up for that lost time and the code can end up running at less than 60fps. If that's what's causing the issue, this build should hopefully fix it:
https://www.mesen.ca/MesenRunSpeedFix.zip
Let me know if this build changes anything on your end.
As for .deb files, I had never actually taken a look at them before, but they appear to be binary files and only contain the breakpoints/bookmarks and a few other options - none of these can really be imported into Mesen reliably. The labels are in the .nl files; it might be possible to add support for those, but I'm not sure they can be perfectly mapped to Mesen's way of managing labels. I'll take another look at this when I get a chance.
Sour wrote:
Essentially if your PC wakes up too late from the sleep calls used to regulate the speed, it never tries to make up for that lost time and the code can end up running at less than 60fps.
Now I *have* to ask how this is implemented. My face right now is pretty much :shock: . This doesn't sound right at all, but at the same time, there are several things in my head that see the need for sleeping.
Maybe separate thread material...
There isn't much to say really - Mesen uses high resolution timers to keep track of time and sleeps between each frame to limit the FPS to 60.1 or 50.0 depending on NTSC/PAL.
Looking at it again now (4+ years after I initially wrote this), there's a tweak or 2 I should probably make to make it more robust/precise, but it's better than trying to rely on vertical sync (especially since the rendering is completely detached from the emulation core), etc.
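The catch-up tweak mentioned earlier (making up for late wake-ups) usually comes down to sleeping toward an absolute schedule instead of sleeping a fixed amount per frame. A minimal sketch, with a fake clock instead of real sleeps; this is illustrative, not Mesen's actual code:

Code:
```python
# Sketch of schedule-based frame pacing: instead of sleeping a fixed
# 1/60.1 s every frame (where any late wake-up is lost forever), track
# an absolute target time and sleep only the remaining gap, so a late
# frame is automatically made up for on the next one.

FRAME = 1.0 / 60.1   # NTSC frame period in seconds

class Pacer:
    def __init__(self, now):
        self.target = now

    def wait_time(self, now):
        """Seconds to sleep before emulating the next frame (>= 0)."""
        self.target += FRAME
        return max(0.0, self.target - now)
```

If the OS wakes the thread 5 ms late, the next sleep is automatically 5 ms shorter, keeping the average at 60.1 fps.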
Syncing off of actual/real VSync (i.e. monitor VSync) I don't think would work any more, especially considering 120Hz, 144Hz, and 240Hz monitors. Some monitors even advertise via EDID a sync rate of 59.94Hz, and Windows 7 in its infinite wisdom and glory rounds that *down* to 59Hz. I don't get the impression anything can do that reliably any more in Windows-land. :\
A common technique used to be to sync off of an audio rate (don't ask me how this works, but it's used in several major SNES emulators), otherwise things like A/V could get out of sync somehow.
James, the guy who did nemulator, talked for a while here about how he did his synchronisation. He ran into problems with his initial approach due to things like CPU frequency throttling (ex. Intel SpeedStep), C1-C4 CPU-level sleep states, etc. (also see blargg's responses!) -- and rather than do the awful thing of changing people's Power Profile settings in their OS, he reinvented his method, described here, and it seems to work remarkably well. He did it in SDL as well.
I guess using sleeps works (you certainly should not keep the CPU running at 100% all the time! Yes some emulators do that, gahhh!), but I was always under the impression that in Windows, like other OSes, you could essentially tell the kernel "I want you to execute this thing (pointer to code) at an interval of N {milliseconds,microseconds,some interval}".
Sour wrote:
For the disassembly issue, this shouldn't ever happen, unless the CDL log is invalid (e.g parts that are not code have been marked as code). I'm assuming this is with a .dbg file loaded? I'm using the data from .dbg files to generate proper CDL data when they are loaded (which in turns means all the code is disassembled properly from the start), but it's possible that in this case the logic I'm using is flawed. If you can send me the .dbg file, I might be able to fix it.
Realigning the disassembly would be a nice thing to have, but I can't see a simple way of doing this at the moment, given the way the disassembly works. Essentially, the disassembly relies on the CDL info entirely, and the entire memory space is disassembled on every step. Using labels to force the disassembler to realign itself is definitely something worth exploring though.
An NES + DBG file is attached. (github source) A similar example to the one above can be found at the label "menu_title_redraw".
I also tried to see if I could "manually" realign by right clicking and trying to assign code vs. data but couldn't really work out how to change anything. Lines got highlighted differently but the disassembly never changed. (Is manual CDL editing possible?)
Another unrelated thought: should F9 breakpoints on banked ROM maybe add a bank condition by default?
rainwarrior wrote:
An NES + DBG file is attached. (
github source) A similar example to above can be found at the label "menu_title_redraw".
Thanks - it's actually an issue when turning on the options to disassemble verified data and/or unknown data/code. Without those enabled, I get this:
Attachment:
debug.png [ 10.79 KiB | Viewed 3928 times ]
In this case, if you turn off the "disassemble verified data" option, it should fix itself. It shouldn't be too hard to force the disassembly to realign itself with the verified code when it transitions from one type to the other, though - I'll take a look tomorrow.
Quote:
Is manual CDL editing possible?
It is, with the "mark selection as" options (in either the code window or hex editor), but in your case, since you've set it to disassemble everything, I think it'll end up with the exact same disassembly either way. (Also, when using .dbg files, any changes you'd do to the CDL log would be overwritten the next time the .dbg file is loaded)
Quote:
Another unrelated thought: should F9 breakpoints on banked ROM maybe add a bank condition by default?
It does already, in a way. Pressing F9 adds a breakpoint to a specific offset in PRG ROM (assuming you do it on a row that's currently mapped to PRG ROM, that is), not to a specific CPU memory address, so the breakpoint will only trigger for that specific bank.
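For illustration, here's why that makes the breakpoint bank-specific: the CPU address is translated through the current bank mapping before the breakpoint's ROM offset is compared. This is a toy sketch with a hypothetical 16 KB banking layout (fixed last bank), not any particular mapper:

Code:
```python
# Toy sketch: translate a CPU address to a PRG ROM offset through the
# current bank mapping. A breakpoint stored as a ROM offset only
# matches when the bank that contains it is actually switched in.

BANK_SIZE = 0x4000  # hypothetical 16 KB banks

def prg_offset(cpu_addr, bank_at_8000):
    # $8000-$BFFF -> switchable bank, $C000-$FFFF -> fixed last bank (3)
    bank = bank_at_8000 if 0x8000 <= cpu_addr <= 0xBFFF else 3
    return bank * BANK_SIZE + (cpu_addr & (BANK_SIZE - 1))
```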
koitsu wrote:
A common technique used to be to sync off of an audio rate [...]
Syncing to the audio has some benefits (such as easily avoiding audio pops and crackling), but then your actual FPS will end up varying from one system to the other, and NTSC filters will flicker more if the FPS is too far away from the monitor's refresh rate, etc.
Mesen does do something similar to nemulator (changing the audio's sample rate to keep the video/audio in sync) - I'm not too sure what nemulator syncs itself against, though (e.g. vsync or a timer). The only thing Mesen does at the moment is force the Windows-wide timer resolution to 1ms (only while the emulation is actually running) to help reduce the chances of sleeps being way off target (the default resolution is 16ms, which is pretty horrible).
It's possible there are some reliable ways to set up a periodic callback, but I'm not overly familiar with the Win32 API (I'm more of a JavaScript/C# person :p) - but using something like that wouldn't really result in any tangible benefit, I think; the OS would probably end up doing something very similar to what I'm currently doing with sleeps.
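The "changing the audio's sample rate" approach mentioned above is essentially a feedback loop on audio buffer fill. A minimal sketch; the base rate and the 0.5% limit are arbitrary illustrative values, not Mesen's or nemulator's actual tuning:

Code:
```python
# Sketch of dynamic-rate audio sync: nudge the resampling rate up or
# down based on how full the audio buffer is, so it neither drains
# (crackle) nor overflows (latency). Values are illustrative only.

BASE_RATE = 48000
MAX_ADJUST = 0.005   # +/- 0.5% maximum rate deviation

def adjusted_rate(buffer_fill):
    """buffer_fill in [0, 1]; 0.5 is the ideal midpoint."""
    error = buffer_fill - 0.5          # > 0: too full, consume faster
    scale = 1.0 + MAX_ADJUST * (error / 0.5)
    return BASE_RATE * scale
```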
Funny A/V sync story: my laptop misreports both video and audio rates. The former is permanently locked at 59.837 Hz no matter what the OS says it is (I can make it claim 120 Hz if I want), and the latter actually provides 44060 Hz instead of 44100 Hz for CD audio. Yes, really. I wrote a lot of code to figure those out. The only thing capable of stutter-free playback on my machine is the movie player I wrote myself.
I could write a book on the subject, but it wouldn't help anyone but me.
Bottom line is, assume nothing. If you want stutter-free play, the only sure-fire solution is to sync to vsync and variable-rate-resample the audio, with no external clock. For odd framerates or vsync off, throw in a frame timer, but don't turn off audio auto-adjustment, or make that a separate option.
I got fancy and added a judder DDA as well, duplicating or dropping frames in a fixed integer ratio (double every frame for 60 -> 120, every sixth for 50 -> 60, etcetera), but then I have to support every input and output framerate under the sun. Probably feature creep for you, but it's a lot smoother than the timer-based approach if you really can't change your display framerate.
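That fixed-integer duplicate/drop pattern falls out naturally from a DDA: for each output frame n, show input frame floor(n * in_fps / out_fps). A sketch (illustrative only, not the poster's actual implementation):

Code:
```python
# Sketch of DDA-style frame-rate conversion: for each output frame,
# pick input frame floor(n * in_fps / out_fps). This duplicates or
# drops frames in a fixed integer pattern - every frame doubled for
# 60 -> 120, every sixth frame duplicated for 50 -> 60, etc.

def frame_schedule(in_fps, out_fps, n_out):
    return [(n * in_fps) // out_fps for n in range(n_out)]
```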
Sour wrote:
I think I may know what's causing it to run slower than 60fps at times. [...] If that's what's causing the issue, this build should hopefully fix it:
https://www.mesen.ca/MesenRunSpeedFix.zip
Nope, if anything it made it run even worse, sometimes now it runs TOO fast (i.e. over 60fps) which leads to sound skipping. And the original problem persists. I've noticed it is more likely to happen after alt-tabbing.
rainwarrior wrote:
An NES + DBG file is attached. (
github source) A similar example to above can be found at the label "menu_title_redraw".
This should be fixed as of the latest commit (and the X register should properly return 1 for PAL NSF files). Haven't implemented the NSFe extension yet, though.
Let me know if there's anything else!
darkhog wrote:
Nope, if anything it made it run even worse, sometimes now it runs TOO fast (i.e. over 60fps) which leads to sound skipping. And the original problem persists. I've noticed it is more likely to happen after alt-tabbing.
Yea, in hindsight, I should probably have spent a bit more time thinking about this before posting that build. Here's another build that should be better:
https://www.mesen.ca/MesenTiming.zip
At the very least, when adding random 1-25 ms delays between frames, it's able to keep a fairly stable framerate (59-61fps) and the sound is mostly fine. The same delays with the 0.9.6 code cause a lot of static and the FPS drops to < 55, so hopefully that's a good sign.
Rahsennor wrote:
Bottom line is, assume nothing. If you want stutter-free play, the only sure-fire solution is to sync to vsync and variable-rate-resample the audio, with no external clock.
The problem with vsync is that then you're not actually emulating the NES' actual speed (unless you process 2 frames between a vsync at some point to catch up, etc.), although I agree the difference is pretty small. That being said, Mesen's video/audio sync (obviously) needs some work still - the problem is that all the computers I have access to tend to run it properly, it's always on someone else's computer that the problems show up, which makes it pretty hard to debug things on my end.
Sour wrote:
The problem with vsync is that then you're not actually emulating the NES' actual speed (unless you process 2 frames between a vsync at some point to catch up, etc.), although I agree the difference is pretty small. That being said, Mesen's video/audio sync (obviously) needs some work still - the problem is that all the computers I have access to tend to run it properly, it's always on someone else's computer that the problems show up, which makes it pretty hard to debug things on my end.
I did say 'if you want stutter-free play.' My entire project is one giant war on stutter, up to and including rolling my own MC-FRUC lib to actually increase the framerate. Your needs/goals are different. For one thing, most of your users are probably less OCD about it than me.
But my point is, timers will always stutter. There is nothing you can do but accept that the CPU and GPU aren't on the same clock - even if it works on your machine, it won't on someone else's. If you're after 100% accurate-to-hardware speed you can either live with it or tell the user to set a custom modeline for 60.1 Hz - if their hardware even supports it.
Sour wrote:
I'm not too sure what nemulator syncs itself against, though (e.g vsync or a timer).
vsync, by default, but a timer can be used as well (especially useful for gsync/freesync).
Sour wrote:
The only thing Mesen does at the moment is force the Windows-wide timer resolution to 1ms (only while the emulation is actually running), to help reduce the chances of sleeps being way off target (default resolution is 16ms, which is pretty horrible).
There are a couple of other things that help to varying degrees: 1) Increasing thread/process priority. 2) Using the Multimedia Class Scheduling Service (look at AvSetMmThreadCharacteristics()). Setting power settings to high performance does seem to have the biggest impact on jitter but, as koitsu pointed out, changing that from the emulator is a bad idea.
Sour wrote:
Yea, in hindsight, I should probably have spent a bit more time thinking about this before posting that build. Here's another build that should be better:
https://www.mesen.ca/MesenTiming.zip
Thanks, I played a few games that proved particularly problematic (such as Sachen's Pyramid - a great puzzler and a nice spin on the Tetris formula, I wish someone did a modern version of it; Pyramid 2, on the other hand... it sucked) and they work well on this build. FPS drops by themselves never bothered me, but the sound thing did.
James wrote:
1) Increasing thread/process priority. 2) Using the Multimedia Class Scheduling Service (look at AvSetMmThreadCharacteristics())
Yea, I seem to recall someone else mentioning that increasing the thread/process priority right before sleeping helps reduce the odds of Windows waking the thread up far too late (although since most of the time is spent sleeping, it's probably simpler to always keep the priority higher).
Never heard of AvSetMmThreadCharacteristics() before, I'll take a look when I get a chance, thanks for the suggestion!
darkhog wrote:
Thanks, played few games that proved problematic in particular and they work well on this build
Great, thanks for the confirmation! I'll update the code on Github with those timing changes.
Is it possible to dump the nametables and OAM in a file using the same format as the "Copy Nametable" function when that frame has new tiles? This way I can see how the tiles are used and don't need to play the game up to that point again. Thanks.
mkwong98 wrote:
Is it possible to dump the nametables and OAM in a file using the same format as the "Copy Nametable" function when that frame has new tiles?
What do you mean by "when that frame has new tiles"? New tiles compared to what?
I just released 0.9.7, which is mostly a bug fix release, along with a number of small improvements/fixes for the debugging tools (mostly based on bug reports/feedback on the forums).
Slightly related, but after some testing & talking with gauauu earlier, it looks like Mono 4.2.2 (which was the minimum requirement I used to list for Linux) is pretty terrible at running Mesen. Using Mono 4.6.2 seems to improve the UI's performance a decent amount and seems to fix some of the crashes that could occur while using the debugging tools. I mention 4.6.2 because it's the default version for Mono on Ubuntu 18, but I imagine any other more recent version should work just as well (or better). So, if anybody on Linux is currently using a relatively old version of Mono, I'd recommend upgrading if you can.
Also, just a heads up, 0.9.7 is most likely going to be the last release for 2018 - I'm leaving on a 10-week trip to Japan in a little over a week, so I will not have any time to spend on Mesen until I get back home in December. I'll still be checking the forums from time to time while I'm away, though.
I didn't realize that applied to Ubuntu, but yes, it was quite fast with that version. If only there were a way to make Mono work on OSX...
As for the trip to Japan, you're coming when the strongest typhoons occur; hopefully there won't be much left of them, but this year has had many, many weather-related issues. The mid-south was affected by intense rain, and Hokkaido was hit by a huge earthquake recently. If you only go to Tokyo you should be fine in general. Fukuoka is fine too.
Downloading 0.9.7 now!
With some luck, there will be some progress with WinForms support on macOS eventually. I still want to investigate and try to fix the Wine performance issue eventually, too.
I'm landing in Tokyo on the 28th, so I'm hoping typhoon season ends before that (as far as I know, they're usually not too frequent in October, at least). I'll mostly be traveling back and forth between Tokyo & Kansai for the first few weeks, and then spending the last ~6 weeks or so in Fukuoka, so I should be fine for the most part (hopefully.) At the very least, the other 2 trips to Japan I went on around September/October weren't too bad, typhoon-wise (but maybe I got lucky!)
For now I'm happy to use it on my Windows box since it allows me to debug raster effects quite easily. The new build now shows me a possible issue while animating the palette that I saw on Nintendulator but not on Mesen until now, so maybe this is an actual bug in my code then ^^;;;
When you pass by Fukuoka, depending on your schedule, we can go out for a coffee after work and talk nes related stuff since I'm working downtown
Hopefully some day, but for now it is only Carbon-based, and because it is 32-bit, it is an issue with the latest version of macOS. The fun of Macs... If it finally becomes available, it will help Mesen, that's for sure.
tepples wrote:
There's been some recent discussion about it (e.g. here), but I don't think there's been any actual progress on this so far. Looking a bit just now, it seems like System.Drawing is meant to work properly on Cocoa since a commit in early August. Mono's WinForms is drawn exclusively with Mono's System.Drawing implementation, so that's probably a good step forward.
Banshaku wrote:
When you pass by Fukuoka, depending on your schedule, we can go out for a coffee after work and talk nes related stuff since I'm working downtown
Sure, just keep in mind that despite writing Mesen, I actually don't really have a clue how most NES games are built :p
I'm just going there on vacation, so it shouldn't be too hard to find some free time. And it'll give you an excuse to speak French too :p
Sour wrote:
Sure, just keep in mind that despite writing Mesen, I actually don't really have a clue how most NES games are built :p
I'm just going there on vacation, so it shouldn't be too hard to find some free time. And it'll give you an excuse to speak French too :p
I only speak French on weekends, so I'll probably be searching for my words ^^;;; As for talking about the NES, it doesn't have to be; I'd love to hear what interesting things can be done in C# these days. The last time I used it was for my map editor, which was done in C# 2.0, and I'll need to update a few things to allow some extra data to be managed. The problem is... I don't remember the code!
Sour wrote:
mkwong98 wrote:
Is it possible to dump the nametables and OAM in a file using the same format as the "Copy Nametable" function when that frame has new tiles?
What do you mean by "when that frame has new tiles"? New tiles compared to what?
I mean "new" as in the first frame on which a tile ID + palette combination appears.
Feature request: some way to get at the I2C-related internals of Bandai-based mappers/submappers which use it. Specifically mapper 159, which is the one I'm dealing with at present (aren't you glad I didn't say 157?).
Minimal: I'd find Memory Tools -> View -> EEPROM Data helpful. Don't care if it's labelled "EEPROM Data" or "PRG-NVRAM" (what the Wiki and NES 2.0 header refer to it as).
Extensive: I'd love to be able to see the I2C SDA/SCL lines somewhere, as well as the direction (bit 7 of $800D), but I'm not sure where this would be appropriate UI-wise since it's mapper-specific. The debugger seems most relevant, I guess? I don't know.
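For what it's worth, the kind of SDA/SCL view being requested boils down to a small bus monitor: watch SDA while SCL is high to detect START/STOP, and sample SDA on rising clock edges to build bytes. A toy sketch (all names are illustrative, not Mesen's internals; it doesn't model the ACK bit, which a real device would clock as a 9th bit):

Code:
```python
# Toy I2C bus monitor of the kind a debugger view could expose for
# mapper 159's EEPROM traffic. SDA falling while SCL is high = START;
# SDA rising while SCL is high = STOP; data bits are sampled MSB-first
# on SCL rising edges. Illustrative only - no ACK-bit handling.

class I2CMonitor:
    def __init__(self):
        self.scl, self.sda = 1, 1
        self.bits, self.bytes = [], []
        self.started = False

    def update(self, scl, sda):
        if self.scl == 1 and scl == 1 and self.sda != sda:
            # SDA changed while SCL high: START (1->0) or STOP (0->1)
            self.started = (sda == 0)
            self.bits = []
        elif self.started and self.scl == 0 and scl == 1:
            # rising clock edge: sample a data bit, MSB first
            self.bits.append(sda)
            if len(self.bits) == 8:
                value = 0
                for b in self.bits:
                    value = (value << 1) | b
                self.bytes.append(value)
                self.bits = []
        self.scl, self.sda = scl, sda
```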
OT: This mapper sure caught me by surprise. And while I2C is generally something I'm familiar with (I write stuff using SMBus), I sure didn't expect to see it on a Famicom cartridge. I would bet hard cash this was done to save money; the Bandai games that had this implemented were from 1990 (possibly very late 1989), which aligns almost perfectly with the IC shortage in Japan (further info). I'd bet the thought process went something like this: "we have our own mapper IC and we just need something that has a small amount of non-volatile storage, we don't need 8KB SRAM chips...", and the rest is history. Don't get me started on the endian ordeal...
koitsu wrote:
Feature request: some way to get at the I2C-related internals of Bandai-based mappers/submappers which use it.
Unfortunately, it looks like in my never-ending quest to support more mappers, I implemented these boards without actually ever getting around to implementing the EEPROM functionality. The code for the EEPROM register has been empty since I implemented the mapper over 2 years ago (and apparently you're the only person to ever notice or care enough to remind me :p)
I'm leaving on my trip in just 2 days, so it's unlikely I can find the time to properly implement it + integrate it into the debugger before then - I'll try to take a quick look tonight to see if it's simpler than what I'm expecting, though.
koitsu wrote:
I would bet hard cash this was done to save money; the Bandai games that had this implemented were from 1990 (possibly very late 1989), which aligns almost perfectly with the IC shortage in Japan. [...]
I find it interesting and amusing to read articles like these. Sometimes they get technical facts very wrong. The article lays blame on an SRAM shortage due to Japanese manufacturers shifting production lines to DRAM. "The two chips are similar, but SRAMs work more quickly and are less expensive." I think it should be more expensive, not less. "A competitor, Sega of America, said it isn't feeling the effects of the shortage as greatly because it relies more heavily on different memory chips." As far as I know, Sega used 8KB and 16KB RAM in its Master System console and cartridges, similar to Nintendo. "A typical personal computer, such as a system equivalent to the IBM-AT, needs 16 [256 kilobit] DRAM chips. A more powerful computer, such as IBM's System 30 model, requires 32 such chips." I assume they are referring to the IBM PS/2 Model 30, which at that date used an 8086 CPU, certainly not more powerful than the IBM PC AT. It also doesn't use the kind of 256Kx1 chips the article describes; it uses SIMMs instead.
By the date of the article, Nintendo only published six games using SRAM: Kid Icarus, Legend of Zelda, Metroid, Pro Wrestling, R.C. Pro-Am, Rad Racer. But the article identifies "a popular version of Donkey Kong", which as we know does not use SRAM if they are referring to the NES cartridge. It appears likely that the SRAM shortage spilled over into Mask ROM production. R.C. Pro-Am was revised to use CHR-ROM instead of CHR-RAM, probably due to the SRAM shortage, by the end of 1988.
Great Hierophant wrote:
"A competitor, Sega of America, said it isn't feeling the effects of the shortage as greatly because it relies more heavily on different memory chips." As far as I know, Sega used 8KB and 16KB RAM in its Master System console and cartridges, similar to Nintendo.
The Master System used "XRAM" which was actually a DRAM with a controller to sequence access and refresh. We'd call it a "PSRAM" now.
Obviously that doesn't help for carts that used battery-backed save, but my understanding is that some number of SMS carts used Microwire EEPROMs for save games.
koitsu wrote:
I'd bet the thought process went something like this: "we have our own mapper IC and we just need something that has a small amount of non-volatile storage, we don't need 8KB SRAM chips...", and the rest is history.
And yet the VRC2, with its microwire interface, hit the market in 1987 ... but no games were ever released that used it.
Well, not
until that pirate port of one of the Bandai games.
lidnariq wrote:
"A competitor, Sega of America, said it isn't feeling the effects of the shortage as greatly because it relies more heavily on different memory chips." As far as I know, Sega used 8KB and 16KB RAM in its Master System console and cartridges, similar to Nintendo.
The Master System used "XRAM" which was actually a DRAM with a controller to sequence access and refresh. We'd call it a "PSRAM" now.
Obviously that doesn't help for carts that used battery-backed save, but my understanding is that some number of SMS carts used Microwire EEPROMs for save games.
I didn't know about that kind of RAM, now I do.
I believe all Sega Master System games with non-volatile memory used battery backup, but there are a few Game Gear games which use EEPROM.
Sour wrote:
Unfortunately, it looks like in my never-ending quest to support more mappers, I implemented these boards without actually ever getting around to implementing the EEPROM functionality. The code for the EEPROM register has been empty since I implemented the mapper over 2 years ago (and apparently you're the only person to ever notice or care enough to remind me :p)
That's interesting, because Magical Taruruuto-kun - Fantastic World!! depends heavily on behaviour of $800D (
reminder; and this is an EEPROM game, not PRG-RAM) and there seems to be some generic support for $6000-7FFF (since the game cares about bit 4 of $7F00 quite a bit, though the value it gets back tends to be $67). So I guess there's just enough emulation to get the game functional? If that's the case, then maybe I'm worrying about nothing.
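For reference, here's roughly what I believe the game's SDA sampling looks like. This is a sketch based only on my reading of the wiki pages, not verified against hardware; the bit assignments (SCL = bit 5, SDA = bit 6, direction = bit 7 of $800D writes, data in on bit 4 of $6000-$7FFF reads) are assumptions worth double-checking against the Bandai FCG Board article:

```
; Sketch (assumed bit layout):
; $800D write: bit 5 = SCL, bit 6 = SDA (out), bit 7 = direction (1 = read)
; $6000-$7FFF read: bit 4 = SDA (in)
        lda #%10000000   ; direction = read, SCL low
        sta $800D
        lda #%10100000   ; raise SCL to clock one bit out
        sta $800D
        lda $7F00        ; sample SDA on bit 4
        and #%00010000   ; nonzero = SDA high
        pha              ; keep the sampled bit
        lda #%10000000   ; drop SCL again
        sta $800D
        pla
```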
I should add the Wiki has two pages on this mapper. There's the above as well as
https://wiki.nesdev.com/w/index.php/INES_Mapper_159 , so I end up having to look at both to try and wrap my brain around things. Game in question is mapper 16 / sub 0 per NES header (not NES 2.0), but the Mesen/Nestopia DB forces mapper 159 / sub 0.
What exactly is unclear in the wiki pages on the Bandai FCG boards?
It would have been clearer to have left the Bandai FCG Board article as the master reference, added the clarifying differences to that page, and left the four mapper pages as stubs that just summarized the differences. Now the individual pages don't call out the similarities and differences in a useful way and are forced to say the same thing over and over, and the Bandai FCG Board article is the only thing that explains the differences in a useful way but is mostly orphaned.
Now we've got a Single Version of the Truth problem.
koitsu wrote:
and there seems to be some generic support for $6000-7FFF (since the game cares about bit 4 of $7F00 quite a bit, though the value it gets back tends to be $67). So I guess there's just enough emulation to get the game functional?
All bits currently return open bus in that range, except bit 4, which always returns 0, and bit 3, which returns the data from the barcode reader (when using mapper 157; otherwise it returns open bus).
I guess that's enough for the game to work? (I imagine it sees the EEPROM's content as being all 0s)
I did ask for feedback on the changes I made then and, as so often here, received none, only to have to read complaints later.
I have modified the pages to have mapper 16 provide an overview, and the others listing only the differences to mapper 16.
NewRisingSun wrote:
I did ask for feedback on the changes I made then and, as so often here, received none, only to have to read complaints later.
I have modified the pages to have mapper 16 provide an overview, and the others listing only the differences to mapper 16.
Greatly appreciated! So far this looks like a big improvement. So I just want to thank you again for consolidating things in this manner. :-)
The header of Dragon Ball Z: Kyoushuu! Saiya-jin often has the wrong serial EEPROM size (256 instead of 128), so the save feature of that game won't work on most emulators.
Yes, most NES ROM images were headered a long time ago, when mappers 153, 157 and 159 had not yet been assigned. I hope to soon be ready to release my NES 2.0 header-adjusting utility, which should take care of that problem.
Like I mentioned a few days ago, I'll be away from home starting tomorrow until early December, so I won't be able to make any fixes/changes until I get back (and will probably end up taking a few days to reply to any message I get here). So hopefully nothing goes horribly wrong during that time! ...if it does, you're on your own :p
Found a bug with label assignment. Not urgent, but I might forget to report it later. Have a great trip!
Example code.
Code:
LDA $10 ; mouse click cursor here
PHA
LDA $20
PHA
1. Press F2; the window for address $10 opens. Type the label, press Enter to save.
2. Press Down, Down to go to the next address.
3. Press F2 - the window for address $10 opens again (it should be $20).
A few issues with the database :
Olympus no Tatakai: Ai no Densetsu - Database claims this game has W-RAM, but its U.S. counterpart, Battle of Olympus, does not. Both versions write to W-RAM after starting a new game or continuing an old game, but they do not read from it.
Business Wars: Database claims this game has battery-backed W-RAM, but the game uses a password function to Continue. Plain W-RAM should be the right value.
Dark Lord: Database claims this game has W-RAM, but this game is an RPG and, according to a walkthrough I read, it uses a non-password save system. Battery-backed W-RAM should be the right value.
Nestopia UE's recent v1.49 release has the same issues, plus it only gives 8K to Dezaemon, which has 32K.
Found an issue with the debugger. Here's an example:
Code:
lda var
beq _is_zero
_is_not_zero: lda array, X
.db $2C ; opcode for BIT abs: swallows the next 2 bytes (the LDA #$FF)
_is_zero: lda #$FF
sta v0
rts
The first time it ran, 'var' was non-zero, so it went through the array access path (lda, bit, sta, rts). Later on, 'var' was zero, so it went through the constant access path (lda, sta, rts). While I was stepping through, the debugger showed the branch landing on the line "BIT $FFA9", but it executed "LDA #$FF".
When you jump into the middle of an instruction, what do you expect the debugger to display, so that I can translate this into a proper issue for Sour to fix once back from vacation?
I would show whatever valid instruction the PC points to. In this example, LDA #$FF.
The debugger was highlighted on line $FE66, but the actual value of the PC was $FE67.
Thanks. Reported as
issue #513.
Something we found recently affects more or less all emulators, and only people who develop software: if you accidentally give your CHR a size that is not a power of 2, the emulator will run the ROM just fine, but it will fail on a flash cart like the PowerPak.
Since flash carts are more common these days, and so is homebrew, it would be nice to have some kind of warning to let you know that your ROM's size is not valid. Is there such an option in Mesen? If so, I want to activate it ^^;;;
Yeah, some emulators like BGB (a Game Boy emulator) have something like that, I think. It lets you know things like whether the internal header or CRC is invalid, and also whether the game would run on real hardware or not due to the above problem.
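(Aside: the power-of-two test itself is just the classic "n AND (n-1) is zero" bit trick. Here's a hypothetical sketch of how a header tool could validate a bank-count byte - the routine name and the `temp` zero-page scratch byte are made up for illustration, this isn't Mesen's actual code:)

```
; Sketch: carry set on return = A holds a nonzero power of two.
; 'temp' is an assumed zero-page scratch byte.
check_pow2:
        cmp #0
        beq @bad        ; zero banks: not a power of two
        tax
        dex
        stx temp        ; temp = A - 1
        and temp        ; A & (A-1) == 0 only for powers of two
        bne @bad
        sec
        rts
@bad:   clc
        rts
```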
On the Nametable Viewer screen, it would be nice to have an additional option for "Highlight Tile Updates" that would print numbers over the tiles to show the order in which those tiles were written to vram.
Also, is the visual cue different if a tile was written to more than once in a frame?
Another cosmetic issue with the debugger, this time with self-modifying code in RAM:
Code:
ldy __center_y_pos
dey
dey
__tblptrlo_r0: lda SELFMODIFY_WT_PTRTBL, Y
sta __data_ptr
__tblptrhi_r0: lda SELFMODIFY_WT_PTRTBL, Y
sta __data_ptr + 1
The LSBs of the
LDA ABS,Y instructions are modified to point to different split msb/lsb arrays as needed. The debugger does not update the display to reflect the changes.
I have a 5KByte text file describing several reproducible bugs and feature requests, which I'll be posting once Sour returns from his vacation. It's all stuff that's pretty minor, although some are annoying/weird (like a problem stepping through certain RMW instructions). No crashes though! :)
Hi. When using the PPU Memory viewer tool, is there a reason it doesn't show the palette values at the end? Starting at $3F00. Fceux shows it, so I was wondering the reason for the inconsistency. Thanks!
That's because the palette is stored not in video memory but instead in a separate area of memory within the PPU chip that happens to be overlaid on $3F00-$3FFF using special case logic. Palette reads and writes don't actually go through the PPU's video memory bus.
(The actual behavior involves the 1-byte video memory read delay, and my explanation of it may confuse beginners.)
Why is that a reason not to show the palette memory at the address that you would read or write it? It's still what you'd find at that PPU address.
nesrocks wrote:
Hi. When using the PPU Memory viewer tool, is there a reason it doesn't show the palette values at the end? Starting at $3F00. Fceux shows it, so I was wondering the reason for the inconsistency. Thanks!
There is a "palette RAM" option to view it, at least, which might even be more convenient, but I agree that I'd expect the palettes to appear in the PPU memory view, and it would more intuitively represent how that address space is organized.
I don't think there's really value in showing the "underlying" RAM. In almost all cases it's just a mirror anyway, but in the rare cases that it's not I suppose the CHR RAM view suffices?
rainwarrior wrote:
Why is that a reason not to show the palette memory at the address that you would read or write it? It's still what you'd find at that PPU address.
Actually, palette RAM is overlaid on top of actual VRAM, so you could argue that the viewer in Mesen shows the contents of the memory chip rather than what you'd get if you hypothetically read that address via the PPU registers. There are normally only NT mirrors at $3F00-$3FFF, but there could be unique RAM there too, and it can be read with some mild trickery (
source). I'm not sure how you could write data there though, without bankswitching.
To me it makes more sense to show the underlying memory than the overlaid palette RAM, because AFAIK there are no other means to see that data (if it's unique instead of mirrored), while palette RAM has its own dedicated viewer.
Sure, it may be one of those "who's ever gonna need this?" cases, but the fact is that deliberately hiding something and making it inaccessible just because most people have no use for it is a bad design choice. Palette RAM is *NOT* part of VRAM, and I feel it's important to show that it's treated differently (it's even read differently).
Yeah I agree, if it's the real accurate thing, then that's what it is.
I would like an export feature that had ppu chr0, chr1, nam, pal and oam all in one file. That would be for loading that into the tool I'm making. So far fceux has the complete bg part, while mesen offers the oam export.
tokumaru wrote:
Actually, palette RAM is overlaid on top of actual VRAM, so you could argue that the viewer in Mesen shows the contents of the memory chip rather than what you'd get if you hypothetically read that address via the PPU registers.
Yes, but this is a view of the PPU address space, not VRAM. The area from $0000-$1FFF is another chip entirely, and bankable, etc. Even the VRAM itself might be replaced by other things.
An "internal VRAM" view might make sense (though TBH, I think it's easiest to understand in its place at $2000-2FFF) but I don't really get the purpose of the "underlying" concept here. It is simply not accessible by the PPU addressing, and the palettes are.
nesrocks wrote:
I would like an export feature that had ppu chr0, chr1, nam, pal and oam all in one file. That would be for loading that into the tool I'm making. So far fceux has the complete bg part, while mesen offers the oam export.
You can view the OAM in the hex editor in interim builds of FCEUX (and save it to disk).
nesrocks wrote:
I would like an export feature that had ppu chr0, chr1, nam, pal and oam all in one file.
All of this in a single file? That would make it harder to create labels for each part in a project that includes this file, don't you think?
Quote:
That would be for loading that into the tool I'm making.
Even if it's just for importing in tools, that means that the tools have to accept whatever file format the emulator author decides to use for packing all this stuff together.
rainwarrior wrote:
It is simply not accessible by the PPU addressing, and the palettes are.
It's readable at least... A read from $3F00-$3FFF will immediately return a value from palette RAM, but will also copy the VRAM value to the read buffer. Then, if you change the VRAM address to something below $3F00, you can read $2007 to get the buffered value. Like I said, this falls into the "who's ever gonna need that?" category, but even though they're mapped to different areas in the same addressing space, only writes behave the same, reads are completely different, so treating the whole thing as an uniform block of memory is quite misleading IMO. Palette RAM is more like OAM I think, only it didn't get its own addressing space and access registers like OAM did.
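Here's a rough sketch of that trick in practice (hypothetical code, not taken from any game; assumes rendering is disabled and the usual $2002/$2006/$2007 access pattern):

```
; Sketch: read the VRAM "hidden" underneath palette address $3F00.
        bit $2002       ; reset the $2006 write latch
        lda #$3F
        sta $2006
        lda #$00
        sta $2006       ; VRAM address = $3F00
        lda $2007       ; returns the palette entry immediately, but
                        ; also fills the read buffer from the RAM
                        ; underlying $3F00 (normally a $2F00 mirror)
        bit $2002
        lda #$2F
        sta $2006
        lda #$00
        sta $2006       ; move the address below $3F00
        lda $2007       ; returns the buffered byte: the value that
                        ; was hidden underneath $3F00
```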
There doesn't need to be a format or labels. It's just sequences of raw bytes of fixed length at known locations. No wonder both emulators' PPU memory dump features
already work, with the exception of Mesen's palette, which isn't exported in that file. I just read the correct byte ranges from the file and it works. Are there exceptions? It worked with most games, and the ones that didn't, I figured it was because they change data mid-frame.
nesrocks wrote:
There doesn't need to be a format or labels. It's just sequences of raw bytes of fixed length at known locations. No wonder both emulators' PPU memory dump features
already work, with the exception of Mesen's palette, which isn't exported in that file. I just read the correct byte ranges from the file and it works. Are there exceptions? It worked with most games, and the ones that didn't, I figured it was because they change data mid-frame.
A dump either at the start of NMI, or end of NMI should be fine, but yeah mid-frame banking (or other rendering changes) is very common and will throw it off.
One way to support raster effects like that in a tool is to grab multiple dumps from the emulator and tell the tool in which scanlines the changes are supposed to happen. That should be enough to simulate the output of the emulator. Exporting from the tool would be trickier, though; you'd probably have to write code for specific mappers to handle the necessary raster effects.
nesrocks wrote:
There doesn't need to be a format or labels.
Labels aren't necessary if you're importing the data into a tool, but they're convenient if you're importing from an .asm file. I guess you can manually create labels by adding offsets and such, or INCBIN sections of a larger file, but it's not as versatile (e.g. compression gets very impractical). And by "format" I mean basically the order in which things are stored in the large file. Does OAM come before or after VRAM? Is palette RAM at $3F00 or a separate chunk? Things like that.
Yeah, but isn't it possible to have mid-scanline changes? That'd be crazy. Maybe stop the support at mid-frame changes with whole-scanline granularity. Anyway, this is just a bonus feature of my tool; I guess adding support for all effects would be overkill and beyond its purpose. I don't know of any practical reason to add such a feature to an emulator anyway.
It would be magical though to include OAM at the end of the file to load bg+sprites with one click, but I guess it doesn't hurt for it to be a two step process.
tokumaru wrote:
It's readable at least... A read from $3F00-$3FFF will immediately return a value from palette RAM, but will also copy the VRAM value to the read buffer.
Not only that, but on the older PPUs (rev B/C, all the Vs. System alternate PPUs, maybe others?), reads from $3Fxx just go to the underlying RAM: there's no way to read the palette RAM.
lidnariq wrote:
Not only that, but on the older PPUs (rev B/C, all the Vs. System alternate PPUs, maybe others?), reads from $3Fxx just go to the underlying RAM: there's no way to read the palette RAM.
Woah, now that I did not know about! (I never had a use for reading the palette, personally, but I did test it on my NES! I never realized it wasn't always supported.)
I can actually understand that as a conceptual reason to do the PPU memory viewer this way... though I still think for practical purposes: people will expect palette data to appear where it was written and it's useful to be able to find it here (nesrocks demonstrates one reason why), and I doubt anyone will have practical use for reading CHR RAM data from here.
nesrocks wrote:
Yeah but isn't it possible to have mid-scanline changes? That'd be crazy. Maybe stop the support at mid-frame changes, but at full scanline. Anyway, this is just a bonus feature on my tool, I guess adding support for all effects would be overkill and out of purpose. I don't know of any practical reason to add such a feature to an emulator anyway.
If you know how the raster split is done, you can set breakpoints on the write that causes it and do a couple of dumps. I've done this several times to make diagrams...
Actually on Mesen you can use "Scanline == 100" or similar for a breakpoint condition, so maybe it's pretty easy to manually set a few breakpoints to dump a frame, which is pretty cool! (I haven't yet done this in Mesen, but noticing it's there I'm sure I will someday.)
Edit: ...or you can click on the "Break on..." button at the top right and type a scanline number for a one time advance-and-break to a specific scanline! I didn't notice that before. Ha ha Mesen is so good.
never-obsolete wrote:
Found an issue with the debugger [when jumping into the middle of an instruction]
Should be
fixed as of
revision 70ad89a.
Mesen 0.9.7 seems to have a substantial memory leak when using the Event Viewer while the emulator is not paused. It got as far as ~119 GB of private bytes before starting to give me OOM exceptions such as this:
---------------------------
Mesen
---------------------------
An unexpected error has occurred.
Error details:
System.OutOfMemoryException: Out of memory.
   at System.Drawing.Graphics.CheckErrorStatus(Int32 status)
   at System.Drawing.Graphics.DrawImage(Image image, Int32 x, Int32 y, Int32 width, Int32 height)
   at System.Drawing.Graphics.DrawImage(Image image, Rectangle rect)
   at System.Windows.Forms.PictureBox.OnPaint(PaintEventArgs pe)
   at System.Windows.Forms.Control.PaintWithErrorHandling(PaintEventArgs e, Int16 layer)
   at System.Windows.Forms.Control.WmPaint(Message& m)
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
---------------------------
OK
---------------------------
Fiskbit wrote:
Mesen 0.9.7 seems to have a substantial memory leak when using the Event Viewer
This was fixed a couple of weeks ago (someone else reported the same problem on GitHub). If you try the latest appveyor dev build, the leak should be gone.
Not sure if this is intentional but:
Code:
lda $4B
beq +
dec $4B
+
I had a RW- breakpoint set on $4B. Stepping through the DEC ZP in the debugger requires multiple clicks to actually get to the next line. I'm assuming because of the way RMW instructions work.
This was the eventual pattern that emerged:
Code:
condition clicks
R-- 2
-W- 3
RW- 4
@never-obsolete: I just complained about this in a PM with Sour this weekend, actually. Yes, it's specific to RMW instructions. I don't want to include the PM conversation publicly without his consent, but I did mention that this behaviour is somewhat non-intuitive, debugger-wise. The counterargument is that it's actually highly relevant for people doing cycle-timed code (and that's a legitimate purpose), so I felt it was something that should be discussed community-wise. He stated that an option/checkbox in one of the menus to toggle break-once-per-instruction (vs. per actual T-state read or write) might be possible.
So it's breaking on both a read and a write in the same instruction? Honestly I wouldn't find this helpful in any situation, timing cycles or otherwise.
Due to conditional breakpoints, among other things, breakpoints are evaluated every single PPU cycle.
Write breakpoints are triggered by the actual memory writes (e.g the last or second-to-last cycle of an instruction) - there is nothing that "predicts" that the instruction will eventually write to an address and breaks ahead of it.
On operations with dummy reads or doubled up writes, this means that the breakpoint might be hit multiple times.
Dummy reads/writes can be the source of bugs (e.g reading some PPU registers with an instruction that contains a dummy read, some mapper registers might be affected by them as well, etc.), which is why the debugger does not attempt to "hide" them.
Additionally, it's possible for 2 separate breakpoints to trigger at different parts of the same instruction (e.g maybe an instruction with a regular execute breakpoint also ends up triggering a conditional breakpoint that breaks whenever the ppu reaches cycle 30 on scanline 100), in this case, the debugger will break up to once per PPU cycle, whenever a matching breakpoint exists.
Like I mentioned to koitsu, having a "only allow breakpoints to break once per instruction" option would solve most of these situations, but it will require a bit of extra logic beyond just ignoring all subsequent breakpoints (e.g because users would expect the break to occur on the last write of an instruction rather than the "dummy" write that writes an incorrect value just before, etc.)
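To make the multiple-hit behaviour concrete, here's the cycle-by-cycle bus activity of the `DEC $4B` from never-obsolete's example (standard NMOS 6502 zero-page read-modify-write timing):

```
; DEC $4B - 5 cycles on an NMOS 6502:
;   1: fetch opcode ($C6)
;   2: fetch operand ($4B)
;   3: read $4B                       <- R-- breakpoint fires
;   4: write the OLD value to $4B     <- -W- breakpoint fires (dummy write)
;   5: write the decremented value    <- -W- breakpoint fires again
        dec $4B
```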
Sour wrote:
Like I mentioned to koitsu, having a "only allow breakpoints to break once per instruction" option would solve most of these situations, but it will require a bit of extra logic beyond just ignoring all subsequent breakpoints (e.g because users would expect the break to occur on the last write of an instruction rather than the "dummy" write that writes an incorrect value just before, etc.)
I don't think it's necessary for this option to split that hair, for the same reason I would definitely turn it on. This is how I would normally expect any debugger to work, 6502 or otherwise:
1. The break should always be before the instruction begins, so you can inspect the state of the machine before it runs. Should not be looking at any instruction partially-executed. PC should still match the instruction's address.
2. One step should advance one instruction, all of its reads/writes at once and proceed to the next instruction. Can now inspect state of the machine after that instruction has finished.
Yes, I get that the instruction takes several cycles, and even finer than that we can subdivide into pixels. That's
really cool to be able to dive into if you want, but it's really far removed from being useful for most debugging purposes, and I also think it's unintuitive for most users to have to understand sub-cycle info, or understand that your debugger is handing them breakpoints in the middle of an instruction. I've never encountered any debugger that behaves this way (they may exist, but I haven't seen it), an instruction should be an indivisible unit by default. I'd been confused before about the PC after breakpoints in Mesen, for example, but I hadn't realized until just now that this is why. Hadn't yet understood that it wasn't just me misreading things, or that it was actually caused by this unusual behaviour.
It's very much important to trap breakpoints on the extra reads and writes, that part of it is critical to include, but at least for myself, the sub-instruction concept for breakpoints is way out of left field. I didn't expect it at all.
Instead of a "one breakpoint per instruction" option, I'd make a counter suggestion: by default, don't subdivide instructions: any breakpoint or halt of execution should rewind to the beginning of the current instruction, and any run/step should advance at least one instruction before it can stop/break again.
Instead have an "advanced" option for subdividing instructions which takes this limitation off. At that point allow the "run one PPU cycle" button, maybe allow a "run one CPU cycle" button as well. An instruction sub-cycle indicator somewhere in the UI would help a lot understanding where we are. (Maybe a dream beyond this could be a little
diagram of the current instruction in another panel indicating which step in its sequence we are currently on.) If I could see this stuff indicated clearly, it would actually be really amazing, but definitely in an "advanced" category of features that I think having on by default detracts from normal use.
The main problem with the way it is currently is that I have no idea where in an instruction I am when a breakpoint hits. The PC is likely incremented to the next instruction already for most breakpoints, there's no indicator on which subcycle I'm on or when the current instruction started. Sort of what I was saying above, I noticed little things were not what I expected, but couldn't tell what was bothering me until koitsu pointed out this is an actual feature.
It's really neat to
be able to go cycle by cycle in an emulator like this. I think this is an unprecedented feature, and am really impressed, but I also think that it gets in the user's way; much better to keep it off until they ask for it. Especially trying to step through code, if it starts requiring uneven number of presses per instruction to advance... that seems like a real problem
unless your goal is specifically to look at very minute sub-cycle details.
rainwarrior wrote:
Instead of a "one breakpoint per instruction" option, I'd make a counter suggestion: by default, don't subdivide instructions: any breakpoint or halt of execution should rewind to the beginning of the current instruction, and any run/step should advance at least one instruction before it can stop/break again.
Easier said than done, unfortunately. At best, I could take a savestate every frame and replay the emulation from the last state when trying to rollback the emulation after encountering a breakpoint (which would only result in maybe 5-10 milliseconds of delay even on slower computers). But there are a number of problems that come up when doing this (e.g the trace logger's log wouldn't be correct because of the rewind operation, other things like the profiler, call stack, etc. are also most likely affected), which are fine as "limitations" to the step back feature, but not acceptable if they are triggered every time a breakpoint is hit. It's not impossible to get done, but it involves a fair amount of work. Making it so stepping always runs the current instruction completely before breaking again should be pretty simple, though.
Other than the PC value being "odd" at times when breaking the execution on a read/write, I don't believe there is really anything else that's meaningfully affected by this, though? The execution does break before the reads/writes take place, it just doesn't break at the beginning of the instruction. The PPU does end up running for a few extra cycles before breaking, though.
The debugger window used to cheat and always display the PC as it was at the beginning of the instruction (regardless of the CPU's current state), but I ended up changing it when I added the functionality to edit the emulator's state from the UI, IIRC.
I was under the impression that you'd already implemented rewinding. How does the current "step back" function work?
Anyway, not trying to tell you what's easy to implement or not; just was going from that assumption.
Sour wrote:
Other than the PC value being "odd" at times when breaking the execution on a read/write, I don't believe there is really anything else that's meaningfully affected by this, though? The execution does break before the reads/writes take place, it just doesn't break at the beginning of the instruction. The PPU does end up running for a few extra cycles before breaking, though.
If you're on the second write of an instruction with a dummy write, you would see the dummy value stored there, which would be different from the value at the beginning of the instruction?
Conditional breakpoints based on the value read and written might have very confusing consequences if you don't think about the dummy write in between. e.g. if you wanted to break when something changes from one value to another, an intermediate value might prevent the breakpoint entirely? I'm actually wondering if I might have run into this earlier... may do some tests later.
When debugging I'm often doing it because I'm unsure about my own code, so if the debugger is doing something I didn't foresee, it's more likely that I'd expect there is a problem with my code than the debugger. Having to think about an intermediary value could be a really hard problem. Synthesizing sub-instruction values in my head is something I will never be able to do, and that's sorta what I mean when I suggest that the debugger should present these things in a simpler-looking way by default. Having to think about it as a prerequisite for being able to understand how the debugger is acting is a layer of complexity that feels like an encumbrance. It'd be great to be able to crack open an instruction in the debugger, cycle by cycle, but the need for it in my view is exceedingly rare.
I've needed to know about sub-cycle things a few times when thinking about how mappers work, but I've really never needed to know it when debugging, even when doing finely timed stuff like raster effects and PCM sound playback. In the latter case FCEUX's cycle counter in its debugger is actually extremely good for timing instruction to instruction, and I know exactly where it goes from/to because it always starts at the beginning of an instruction, which is the same way I have to write and think about the code. In the case of raster timing, I don't normally have the luxury of landing an instruction on a specific cycle anyway; what I need to understand is the range of timings where the instruction can happen. Sub-instruction steps have never seemed that important for it. They're important for the emulator to get in the right place, of course, but for writing NES software it's an internal detail I don't need to think about as long as it's consistent.
Even something more mundane like knowing the PC is a different value later in the instruction seems more like a vector for confusion to me than useful knowledge about the instruction's operation. (Maybe even critical to get this correct as an emulator author, but when writing software, I don't see the utility.)
Again, really impressed that Mesen even has this level of detail in its debugger, and I think it's fantastic as an extra feature, but only if it doesn't get in the way of more fundamental things. ...and probably I'm making suggestions that would be hard to implement, but I'm only trying to offer my opinion of how I expect a debugger to operate. (At the very least, I strongly recommend putting some sort of "sub instruction" indicator somewhere so I can at least see how many cycles into the instruction we have reached.)
So, that's kinda what I meant in my more succinct post above, but it probably needed elaborating. I'm sure others might have more thoughts on this, or differing thoughts; I'd be glad to hear them.
(Apologies for the repeated comparisons to FCEUX, but I've been using it for 15 years, so it's what I know!)
I definitely have a preference for FCEUX's approach of breaking at the start of the instruction, but I totally understand why Mesen does it the way it does and that it'd be hard to change it. Some of this overlaps with what rainwarrior said, but as a user, I'd say the pain points of the current breaking behavior (assuming I'm not overlooking things that address these) are the following:
- When I break on something, I can't then modify register state to reliably affect future execution. I break on a STA, change A, and the old value is the one that gets stored (but A was changed and will affect the next instruction). There's internal state I'm not allowed to see or modify.
- There's no visibility into where I am within an instruction. Am I at the start or several cycles in? Instructions have multiple different steps and those are normally entirely hidden, but then they suddenly matter when breaking happens without the information being surfaced. Just knowing the current cycle of the instruction would be a big help, even more if something (instruction tooltips?) also laid out what happens on what cycle.
- It's not always clear why we've snapped on an instruction, such as with a RMW instruction potentially triggering 3 (or more, if also using execute) times. This strikes me as part of a larger issue where I often find myself wondering 'why am I here?' when the debugger snaps. Maybe it's a breakpoint (which one?), or an invalid instruction, or...? Highlighting the triggering breakpoint would be helpful, and/or maybe having a status indicating why the snap happened.
- Forbidding breaks on specific instructions is cumbersome. In FCEUX, I can create a breakpoint on an address and check Forbid so that that instruction won't trigger breakpoints anymore. In Mesen, I can modify the condition on a specific breakpoint to add a PC exemption (much less convenient, but OK), but this doesn't work because PC has been incremented by the time the instruction does the action we break on, so it's actually some unaligned PC that I need to exempt. I need to know what that intermediate PC is, or exempt a range of PCs per instruction (per breakpoint).
I agree with rainwarrior that sub-instruction debugging is really cool, but I'm unsure when I'd want to use it. I have a lot of experience doing carefully timed, cycle-precise code for things like screen splits or input reading and while this debugger behavior could maybe provide some value for such work, I've generally found that cycle-counting and the event viewer tend to be enough. I definitely like that it's there, but it seems to be overkill for the vast majority of use cases and can currently get in the way of clarity and some functionality I expect from other debuggers I've used. Things like these make me still reach for FCEUX a lot of the time when I don't need Mesen's more powerful features just because FCEUX is often easier, though I really love the feature set Mesen provides and its high level of accuracy (and I have been using it more and more as I get more used to it).
rainwarrior wrote:
I was under the impression that you'd already implemented rewinding. How does the current "step back" function work?
It's implemented but like they say, the devil is in the details :) When you use the step back function, it affects a number of pieces of debugger-related state (the trace logger's log, the profiler, access counters, and potentially more that I'm forgetting right now?) This is "ok" for a relatively minor feature like step back (which isn't used all that often), but if I were to apply this as-is to breakpoints, it wouldn't be ideal.
That being said, thanks for the feedback, I appreciate it. I agree that it's better/simpler to break at the start of an instruction at all times (at least by default). I could spend some time trying to argue some points or to explain why it might be relatively hard to pull off, but it's probably better to spend that time actually trying to implement it :p
On top of this, I think the idea of displaying "where" inside a specific instruction the debugger is currently at (e.g in the case where you re-enable sub-instruction breaks) is a pretty neat idea, too. Displaying which breakpoint triggered the break is also a good idea (it's been on my todo list for a while, too).
Fiskbit wrote:
I can modify the condition on a specific breakpoint to add a PC exemption (much less convenient, but OK), but this doesn't work because PC has been incremented by the time the instruction does the action we break on
IIRC, there is an "OpPC" value that can be used specifically to address this issue (it's documented inside the help tooltip for conditions)
Haven't quite gotten around to making breakpoints break at the start of the instructions yet. I actually just took a look at how FCEUX did this and it seems like it just uses some code to calculate the final read/write address of an instruction in the debugger ahead of time. This ends up copying a tiny portion of the CPU's logic in the debugger, but will actually be far easier than attempting what I was considering so far - turns out predicting the future is a lot easier than going back to the past :)
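As a rough illustration of that "predict the future" approach, here's a hypothetical Python sketch (not FCEUX's or Mesen's actual code) of computing an instruction's effective read/write address from the registers and operand before it executes, so a breakpoint on that address can be evaluated at the start of the instruction:

```python
# Hypothetical sketch: compute the effective address of an upcoming
# instruction from its addressing mode, operand, and index registers,
# before the instruction runs. This duplicates a tiny slice of the
# CPU's addressing logic inside the debugger.

def effective_address(mode, operand, x=0, y=0):
    if mode == "zp":
        return operand & 0xFF
    if mode == "zp,x":
        return (operand + x) & 0xFF        # zero-page indexing wraps within page 0
    if mode == "abs":
        return operand & 0xFFFF
    if mode == "abs,x":
        return (operand + x) & 0xFFFF
    if mode == "abs,y":
        return (operand + y) & 0xFFFF
    raise ValueError("unsupported mode: " + mode)

# STA $1234,X with X=$10 will write to $1244, so a write breakpoint on
# $1244 can fire before the store actually happens.
assert effective_address("abs,x", 0x1234, x=0x10) == 0x1244
```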
That being said, in the meantime I've been working on making the current behavior easier to understand. I've added UI elements to let the user know what caused the execution to break (e.g a breakpoint or an option, etc.) and also display the current instruction's "progress" (e.g what cycle are we on, is it an execute (X), read (R), write (W), dummy read/write (DR/DW) cycle, etc.). These can be turned off via the options menu ("Show break notifications" and "Show instruction progression")
Attachment:
CodeWindow1.png [ 20.82 KiB | Viewed 4918 times ]
Attachment:
CodeWindow2.png [ 4.91 KiB | Viewed 4918 times ]
Attachment:
CodeWindow3.png [ 15.72 KiB | Viewed 4918 times ]
One thing to note is that the "progress" shows an "estimated" total cycle count for the instruction. Crossing pages or taking branches will increment the total cycle count as needed during execution (so the total shown when the instruction starts will not be correct 100% of the time). e.g a branch will start off as "1/2", but if the branch is taken and crosses a page, it will end up going 1/2 -> 2/2 -> 3/3 -> 4/4.
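The final total for a branch follows the standard 6502 timing rule; as a sketch (hypothetical helper, just mirroring the 1/2 -> 2/2 -> 3/3 -> 4/4 progression described above):

```python
# Sketch of the final cycle total for a relative branch: 2 cycles if
# not taken, +1 if taken, +1 more if the target is on a different page
# than the address following the branch instruction.

def branch_total_cycles(taken, pc_after_operand, target):
    total = 2
    if taken:
        total += 1
        if (pc_after_operand & 0xFF00) != (target & 0xFF00):
            total += 1                     # page crossed
    return total

assert branch_total_cycles(False, 0x80F0, 0x80C0) == 2  # not taken
assert branch_total_cycles(True,  0x80F0, 0x80C0) == 3  # taken, same page
assert branch_total_cycles(True,  0x80F0, 0x8110) == 4  # taken, page crossed
```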
And also added an option in breakpoints to determine whether dummy read/writes should trigger the breakpoint (off by default):
Attachment:
Breakpoints.png [ 11.61 KiB | Viewed 4918 times ]
These changes should hopefully improve the current debugging experience - any feedback on these is more than welcome (the latest appveyor build [Windows] [Linux] contains these changes).
How does Mesen determine SRAM size for MMC5 games that have an unknown CRC and no NES 2.0 header?
There's a standing recommendation to default to 64K. When w7n (a member of the FamiTracker users' Discord server) complained about an emulator not following this, I recommended that ROM authors could specify the correct RAM size in an NES 2.0 header. But w7n claimed that it would be just as easy for each emulator author to default to 64K for pre-NES 2.0 headers, as specified on the wiki, as it would for each ROM author to switch from the old header to NES 2.0.
EDIT: I'm asking this with respect to multiple emulators' MMC5 implementations, as each may or may not be affected by this design decision.
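For reference, here's a sketch of how an NES 2.0 header encodes PRG-RAM sizes (per the NES 2.0 spec: header byte 10 holds two shift counts, low nibble for volatile PRG-RAM and high nibble for PRG-NVRAM, with size = 64 << shift and 0 meaning none). The helper name is made up for illustration:

```python
# Decode NES 2.0 header byte 10 into (PRG-RAM size, PRG-NVRAM size).
# Each nibble is a shift count S; the size is 64 << S bytes, and a
# shift of 0 means that kind of RAM is absent.

def nes2_prg_ram_sizes(byte10):
    def decode(shift):
        return 0 if shift == 0 else 64 << shift
    return decode(byte10 & 0x0F), decode(byte10 >> 4)

# A shift count of 10 in the high nibble gives 64 << 10 = 65536 bytes,
# i.e. the 64 KiB of battery-backed RAM recommended as the MMC5 default.
assert nes2_prg_ram_sizes(0xA0) == (0, 65536)
```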
Mesen currently forces 64kb of SRAM for all MMC5 games, with no exceptions. Even NES 2.0 headers cannot be used to override this at the moment (which is a problem, and should be fixed, actually)
This is 100% a false alarm; Mesen has only ever supported 64 KiB SRAM regardless of the header.
Code:
$ git blame MMC5.h | grep GetSaveRamSize
b4489ed0f (Souryo 2016-12-17 23:14:47 -0500 274) virtual uint32_t GetSaveRamSize() override { return 0x10000; } //Emulate as if a single 64k block of saved ram existed
$ git blame b4489ed0f^ MMC5.h | grep GetSaveRamSize
dffc03ad6 (Souryo 2015-07-29 22:10:34 -0400 268) virtual uint32_t GetSaveRamSize() { return 0x10000; } //Emulate as if a single 64k block of saved ram existed
$ git blame dffc03ad6^ MMC5.h | grep GetSaveRamSize
fatal: no such path Core/MMC5.h in dffc03ad6^
edit: sniped by the author himself
Sour wrote:
Haven't quite gotten around to making breakpoints break at the start of the instructions yet.
Alright, so after a bit more fumbling, I finally have something that works and is reasonably efficient. (Appveyor dev builds: [Windows] [Linux])
Essentially the debugger executes the upcoming instruction on a "dummy" CPU that can't affect the emulator's state, keeps track of the reads/writes done by the instruction, and then breakpoints are evaluated based on that information, before ever executing the instruction. Since the debugger is only allowed to read APU & PPU registers (not mapper registers) during this process, it's not perfect (e.g if you try to use a conditional breakpoint based on the value read from the MMC5's multiply register, it won't work as expected, but the old breakpoint system will). For the vast majority of cases, it should work as expected, though.
This is slightly slower than the "old" breakpoint system (because it essentially requires executing every CPU instruction twice to predict the read/write patterns properly), but in practice the difference in performance is less than 10% (and the typical cases where no read/write breakpoints exist run as fast as they did before).
The option to control this behavior is Options->Break options->"Enable sub-instruction breakpoints" (if anyone has a better idea for the name, let me know..). When enabled, it will revert to the 0.9.7 breakpoint system of breaking in the middle of instructions for read/write breakpoints, etc. The option is disabled by default, so the default is now similar to FCEUX's behavior. In this new mode, the debugger will also only break a single time per instruction, no matter how many different breakpoints match.
One thing to keep in mind is that the "Run PPU cycle/scanline/frame" options can still break in the middle of instructions, but other than this, the execution should now always break at the start of the instruction. It's also now possible to edit the value that is about to be written (e.g having a write breakpoint on a STA instruction and changing the value of A before stepping will write the updated value - this was one of the issues described by Fiskbit in his post).
Hopefully this makes working with Mesen's debugger more intuitive from a homebrew/romhacking point of view.
Let me know if you find any bugs with this (it's a fairly large change, so bugs are somewhat likely) or if there are any other concerns with regards to usability vs FCEUX, etc.
As suggested by yaros in another thread, I added a "Go To All" feature to the code window in Mesen. This behaves more or less like the same feature in Visual Studio - you type some text, and it finds whatever it can that matches it (labels, symbols, constants, registers, files). The default shortcut is Ctrl+comma (mostly because that is one of the default shortcuts for it in VS), but it can be customized like all the other shortcut keys.
This is what it looks like on a project using CC65 integration w/ .dbg files:
Attachment:
GoToAll.png [ 21.41 KiB | Viewed 4018 times ]
You can navigate the result list with the up/down/page up/page down keys, or just by scrolling/clicking with the mouse. Pressing enter or double-clicking on an entry will close the window & navigate to that location in the code window.
When a specific label isn't available to the debugger (e.g because it's been switched out of memory), the debugger will automatically switch to source view to display the original code.
Each entry lists some basic information: the offset in PRG ROM (or work ram, etc.), the current memory location in CPU memory, the current value stored at that address, and the source file & line number (with CC65 integration).
The icons indicate the type of data:
-Functions
-Jump/Branch targets (this isn't perfect quite yet since the debugger loses this information every time it is restarted)
-Labels that are defined as registers
-File
-Constant
-And another "hex code" icon for anything else
When used on a project with no CC65 integration, the debugger can't display code/data that's been switched out of memory, so the entries will be shown last in the list, with a warning icon (and they will be disabled):
Attachment:
disabled.png [ 22.65 KiB | Viewed 4018 times ]
There's still a bit of work to do - e.g I'd like to make the search logic a bit better than it is now and I want to add the same feature to the hex editor window, too. As usual, any feedback is welcome! (The latest appveyor build has the feature)
...and Mesen is now included in the group of programs that mysteriously refuse to run on my computer. Double click, busy cursor, then nothing. No error messages or anything. It's probably not Mesen's fault, as this was already happening with Microsoft Store apps (Calculator, Calendar, even the Store itself, which don't get past the splash screen), but it's really frustrating that my installation of Windows 10 keeps doing this. I can't do anything if Windows won't give me any hints as to what the problem might be. Maybe it's time I formatted this thing already...
Does it also fail to print anything if you start Mesen.exe from the Command Prompt?
I would have guessed that your .NET installation might be broken, but the calculator/etc most likely use UWP (and may or may not use .NET at all). You could try running Visual NES to see if it opens up properly (it pretty much uses the same UI vs core pattern as Mesen does, minus the need for DirectX/etc.).
Beyond that, you could try creating a new profile on your computer and seeing if applications work any better with that profile, too.
hi Sour,
I am creating labels in your debugger, by pressing F2, and I have failed to find a discussion here about how to create 16bit labels. We have this pointer called "p43" and so I've tried:
1.) Naming $43 and $44 "p43" but, Mesen doesn't allow identical labels.
2.) Naming $43 "p43+0" and $44 "p43+1" but, Mesen doesn't allow the + in label names.
3.) After 1 and 2, I failed to find a discussion about "16bit labels" so I visited your appveyor and downloaded the latest Mesen to see if 16bit labels had been added, but haven't discovered 16bit labels.
4.) In the newest Mesen, I tried to change the address on the Edit Label screen to values such as "{43}" or "43, 44" but, haven't been successful in creating a 16bit label.
Therefore, I suggest providing some simple way to create multibyte labels... like if I choose label name "p43" for 2 bytes, I would expect to receive "p43+0" for $43 and "p43+1" for $44.
Maybe you've already implemented 16bit labels; could you explain how to use them?
Thank you so much for reading through my Mesen distress.
Somehow, something about the way you wrote this made me realize I could actually implement this relatively easily (despite thinking it would be relatively complex up until now), so I did!
Attachment:
multibytelabels.png [ 32.78 KiB | Viewed 2617 times ]
This adds a "Length" field to all labels (so works for both 2-byte words and plain old arrays).
It also works w/ CA65 integration - so all symbols from a CA65 project will now have their proper length in the disassembly, etc.
One thing I was wondering is if it was better to show "+0" for the first index or to only show the label's name - in the end I opted to show the "+0", since it makes it obvious that the label is a multi-byte label.
The main thing left to complete this would be adding this length field to the ".mlb" file format - will take a look at that later.
The next appveyor build should have these changes - if you find any issues with it, let me know!
Good one!
YEAY SOUR! THANK YOU SO MUCH!
I love how it creates p43+0 and p43+1 while keeping the label p43 in the label list! Great work; this is so much fun!
So happy my writing could make a difference... thank you for telling me!
um... Sour, this isn't a "problem" but, maybe it's an issue? When naming a function, the length field was still there... so I pressed F2 again and increased the length to 2, which didn't make any changes; however, after pressing F2 again and increasing the length to 4, it created two identical labels both ending with "+2". This isn't a "problem" because switching the length back to 1 removed those extra labels.
I thought an easy solution would be to eliminate the length option if the address is 16 bits... but now I realize that would restrict length to zeropage addresses only. Maybe only allow length for RAM labels; is that even possible?
I haven't tried to use length on a non - zeropage RAM address.
^edit: Sorry, that function starts at $D3D3 and it has two reverse branches to $D3D5 later on in its code... so the two identical labels ending with "+2" were correct. (Also noticed that every function call ended its label with "+0".) So non-zeropage RAM addresses should work just fine with length >1.
You mean it did something like this? (this code makes no sense, but just for illustrative purposes)
Code:
mylabel:
LDA #$00
LDX #$00
BNE mylabel+2
LDY #$00
BNE mylabel+2
Where "mylabel+2" would normally be a branch to the 2nd line (the "LDX #$00" instruction)
In that case, it's probably ok, I think. This isn't likely to happen (and it's not actually giving false information, either). Multi-byte labels in PRG ROM are useful for hardcoded byte arrays, etc., so I can't really disable them altogether.
Internally, the debugger is creating multiple labels (e.g: mylabel+0, mylabel+1, mylabel+2, mylabel+3), but I made it so only the first label is shown, and without displaying the "+0", which is why it still says "mylabel:" rather than "mylabel+0:" on the first line. Otherwise the display for non-disassembled data arrays was a mess, e.g:
Code:
mydata+0:
.db $00
mydata+1:
.db $00
mydata+2:
.db $00
mydata+3:
.db $00
Instead of just:
Code:
mydata:
.db $00 $00 $00 $00
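The display rule above can be sketched like this (hypothetical helper, not Mesen's actual code): a label of length N covers N consecutive addresses, but only the base address shows the bare name, while the rest show "name+offset":

```python
# Sketch of multi-byte label display: internally there are N labels
# (name+0 .. name+N-1), but the base address is shown without "+0".

def display_label(name, base, length, addr):
    if not (base <= addr < base + length):
        return None                        # address not covered by this label
    offset = addr - base
    return name if offset == 0 else f"{name}+{offset}"

assert display_label("mydata", 0x8000, 4, 0x8000) == "mydata"
assert display_label("mydata", 0x8000, 4, 0x8002) == "mydata+2"
assert display_label("mydata", 0x8000, 4, 0x8004) is None
```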
tepples wrote:
Does it also fail to print anything if you start Mesen.exe from the Command Prompt?
Yup. Nothing at all.
Sour wrote:
I would have guessed that your .NET installation might be broken, but the calculator/etc most likely use UWP (and may or may not use .NET at all). You could try running Visual NES to see if it opens up properly (it pretty much uses the same UI vs core pattern as Mesen does, minus the need for DirectX/etc.).
Visual NES does work!!
Quote:
Beyond that, you could try creating a new profile on your computer and seeing if applications work any better with that profile, too.
That didn't work for the Windows Store Apps, and I don't really feel like wasting any more time trying to fix this... I'll just format the hard drive and start from scratch. Thanks for the help.
One more thing that might be worth trying is starting up Mesen with these command line options to disable all directx components:
Mesen /novideo /noaudio /noinput
If this works, the problem is most likely directx related
I've been using Mesen for my development more lately and have been keeping a list of issues and wants as I encounter them. Most of this is from version 0.9.7.37. Apologies if any of these are already known or fixed. (Also, FYI, I did wind up finding a good use case for sub-instruction breakpoints in the form of DMC DMA debugging. Was really useful!)
Bugs:
- OAM decay emulation seems to have an issue with 32-bit signed integer overflow. While developing my decay tests, I found that the emulator would break and report decay on cycle 2147480649.
- I think DMC DMA timings when occurring on write cycles aren't quite correct. When DMC DMA lands on the first of two consecutive write cycles, it should take 4 cycles (and be delayed by 2 cycles), but I think Mesen is taking 3.
I posted a test ROM that fails in Mesen because of this.
- Disabling sub-instruction breakpoints prevents DMC DMA reads from triggering breakpoints. I don't know if this is something you intended, but it caused me confusion when I was trying to break on these and it wasn't working on 0.9.7.37.
Things I found somewhat confusing or annoying:
- Resetting while paused will unpause if the debugger is open. I remember this sort of thing being difficult for you to improve when I last made suggestions around the debugger and pausing, and I don't remember if this was something you decided was unreasonably invasive to fix. Not a huge deal, but I often do this in FCEUX to start debugging from first instruction, and it's more convenient than going through the menus to enable/disable "Break on power/reset".
- Selecting "Edit in Memory Viewer" in the debugger takes me to CPU memory, but I would expect PRG ROM, instead, like FCEUX. I'm not even sure what behavior to expect when editing ROM in CPU memory.
- When the emulator isn't paused, the selected byte in the Memory Tools loses its highlight when the window isn't selected, which makes it hard to keep track of the byte when interacting with the emulator or debugger windows to see how its value changes.
- Highlighting a field in the debugger, such as the cycle count, and copying with ctrl+C doesn't work; the currently selected instruction is copied, instead. I was manually transcribing these until I realized I could highlight and right click to copy.
- When creating a breakpoint by clicking in the code view in a 24 KB mapper 0 ROM on an address $C000-FFFF, the breakpoint claims to be on an address $8000-BFFF.
- There's no way to see which saved state slots are actually filled so I know which to save to or can load from, especially since trying to load a state that doesn't exist hangs the emulator for several seconds.
Improvements I'd love to see:
- I really need to be able to load a saved state even after modifying a ROM to enable fast development. I know I've mentioned this one before, but it remains the biggest hurdle for me when using Mesen.
- When DMA occurs within an instruction, I don't think there's a way to tell which cycles belong to the instruction and which belong to the DMA (or it's not clear).
- There are options for stepping by PPU cycle, scanline, frame, and instruction, but not CPU cycle. I see I can do this with the 'Break in...' menu, but with sub-instruction debugging, it would be useful to have this more conveniently accessible.
- Can you add a hotkey for "Go To... Program Counter"? FCEUX has a handy button, while in Mesen, I need to either hit ctrl+G and type in the PC value or go Search -> Go To... -> Program Counter, neither of which are very convenient.
- A highlight on the currently-hovered tile in the Nametable Viewer and Sprite Viewer like in the CHR Viewer (although perhaps without the click-highlight behavior; there doesn't seem to be a way to not have some tile be selected in the CHR Viewer).
- A right-click option for the Nametable Viewer to add a breakpoint on the selected tile.
- A right-click option for the Event Viewer to add a breakpoint on the selected dot.
- A breakpoint condition for an instruction being jumped to (via branch, jmp, jsr, rts), or the previous PC being outside some range, would be extremely useful.
- Breakpoints on PPU state change, such as various flags flipping (sprite 0 hit, vblank, etc). The debugger makes the state visible, but I don't see a way to set breakpoints on it.
- It'd be really nice to be able to see some more internal state, like 'this is a write cycle'.
- Show the current dot in the Event Viewer when snapped. Currently, it only shows the current scanline.
Nice-to-haves:
- An option in the Sprite Viewer to see outlines showing the positions of all of the sprites in the Screen Preview.
- An option in the Memory Tools to highlight the row (xxx0) and column (0x) indicators for the currently selected byte would probably make it a bit easier to track which byte I'm on.
- A way to see the values of the PPU's t/v/x/w/flags registers in the event viewer across each dot on a given frame. It'd be helpful and informative to mouse around and see how this stuff is changing dot-to-dot, how writes impact the registers, and so on. I can dig into this with the debugger, but not nearly as easily.
Despite the length of this list, I've been really enjoying this emulator and its tools and really appreciate all the work you've put into it. It's been extremely useful for the projects I've been doing. Thanks!
Fiskbit wrote:
There's no way to see which saved state slots are actually filled so I know which to save to or can load from, especially since trying to load a state that doesn't exist hangs the emulator for several seconds.
The emulator shouldn't be hanging on loading a save state that doesn't exist - are you sure it's not a matter of trying to load a state for an older version of the ROM? In this case, Mesen will search through your NES files to try and find a matching ROM, which can take a while if you have a huge rom collection.
Fiskbit wrote:
I really need to be able to load a saved state even after modifying a ROM to enable fast development. I know I've mentioned this one before, but it remains the biggest hurdle for me when using Mesen.
There's been an option for this for a while now (Preferences->Save Data->Allow save states to be loaded on modified roms), does it not work in this scenario? (This should also fix the freeze problem with load states, I think)
Fiskbit wrote:
Can you add a hotkey for "Go To... Program Counter"? FCEUX has a handy button, while in Mesen, I need to either hit ctrl+G and type in the PC value or go Search -> Go To... -> Program Counter, neither of which are very convenient.
There is one, but it's called "Show Next Statement" (Alt+* by default, iirc), for historical reasons (and because that's how it's called in VS)
Fiskbit wrote:
It'd be really nice to be able to see some more internal state, like 'this is a write cycle'.
The new "show instruction progress" feature does tell you whether you're on an exec, read, write, dummy read or dummy write cycle. Was there anything else you wanted to see specifically?
Fiskbit wrote:
(although perhaps without the click-highlight behavior; there doesn't seem to be a way to not have some tile be selected in the CHR Viewer)
This is mostly because the tile can be selected to edit it on the right-hand side, though I guess highlighting the tile on the left side isn't entirely necessary.
Thanks for the list! Most of it should be relatively easy to fix, I'll let you know if I have any questions.
Sour wrote:
The emulator shouldn't be hanging on loading a save state that doesn't exist - are you sure it's not a matter of trying to load a state for an older version of the ROM? In this case, Mesen will search through your NES files to try and find a matching ROM, which can take a while if you have a huge rom collection.
Ah, you're absolutely right; it's hanging only when loading a state that doesn't match the current ROM. The search seems pretty clunky, though; it plays the current audio repeatedly and then speeds up afterward to compensate for the lost time. If it finds another matching ROM, does it automatically load that to work with the state, or what? This strikes me as surprising behavior, though I imagine I won't be seeing it anymore now that I know about this:
Sour wrote:
There's been an option for this for a while now (Preferences->Save Data->Allow save states to be loaded on modified roms), does it not work in this scenario? (This should also fix the freeze problem with load states, I think)
I didn't realize that had made it in as a setting! Yes, this solves my problem. Thanks!
Sour wrote:
There is one, but it's called "Show Next Statement" (Alt+* by default, iirc), for historical reasons (and because that's how it's called in VS)
Ah, OK. Was going to comment on the hotkey (requires numpad multiply, which is very inconvenient on my keyboard), but just found the shortcut configuration menu, which solved my problem. Perfect!
Sour wrote:
The new "show instruction progress" feature does tell you whether you're on an exec, read, write, dummy read or dummy write cycle. Was there anything else you wanted to see specifically?
You're right, that should cover this.
Sour wrote:
This is mostly because the tile can be selected to edit it on the right-hand side, though I guess highlighting the tile on the left side isn't entirely necessary.
Oh, I didn't realize that was editable. The persistent highlight makes sense, then, and is probably good to have, but I do wish I could have either a way to make it go away or obstruct the graphic less, since it's really hard to see what the tile is behind the highlight. As far as the interface goes, I'm not sure how I'd make it go away when not needed, but maybe using an outline, instead, would be better and might also make it easier to see how the tile fits in with neighboring ones while editing.
Thanks for looking into these!
Fiskbit wrote:
The search seems pretty clunky, though; it plays the current audio repeatedly and then speeds up afterward to compensate for the lost time. If it finds another matching ROM, does it automatically load that to work with the state, or what? This strikes me as surprising behavior
Yes - this behavior exists because you can open a save state file directly and have Mesen load the corresponding game. If it's freezing for a few seconds, you likely have several hundred or thousand ROMs, which is not really a typical scenario.
I've done some changes (halfway done or so):
-Current byte in hex editor is always highlighted
-Right-click on nametable can put a breakpoint (on a "nametable ram" location, which is new, too)
-DMC reads will now trigger breakpoints, but this will trigger in the middle of the instruction, even if sub-instruction breakpoints are disabled (because otherwise this would require predicting the next DMC read ahead of time, which implies predicting the APU's behavior, which is a lot of trouble/CPU time)
-Copy action on text boxes should be fixed
-The "read" cycle for the DMC will now show up as "DMC" in the instruction progress. Stall cycles aren't marked, yet, though.
-The "break on decayed OAM read" option should no longer trigger when the CPU cycle counter reaches 2^31
-Added an option to the hex editor to highlight the current column/row
Fiskbit wrote:
- When creating a breakpoint by clicking in the code view in a 24 KB mapper 0 ROM on an address $C000-FFFF, the breakpoint claims to be on an address $8000-BFFF.
If you mean the breakpoint list is displaying the "wrong" address, this is normal in this case. The breakpoint is added to the real offset in PRG ROM, and then the matching (current) CPU address for that breakpoint is calculated as needed - several addresses can match when mirrors exist, and only the first one is shown. e.g., on an 8 KB PRG ROM game, putting a breakpoint on $E500 might show "$8500 [$0500]" as the address, the first one being the current (calculated) CPU address (for the first mirror) and the second being the offset in PRG ROM (which is where the breakpoint is actually set). It doesn't know that you put the breakpoint on $E500 specifically - it'll break when any of the mirrors is executed/read/written.
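To make the mirror math concrete, here's a rough Python sketch of the mapping Sour describes (simplified NROM-style mirroring of a small PRG ROM into the $8000-$FFFF window; the function name is made up, not Mesen's code):

```python
def cpu_addresses_for_prg_offset(prg_offset, prg_size):
    """List every CPU address in $8000-$FFFF that maps to a given PRG ROM
    offset, when a small PRG ROM is simply mirrored across the 32 KB
    CPU window (simplified NROM-style mapping; real mappers differ)."""
    window = 0x8000  # size of the $8000-$FFFF region
    return [0x8000 + base + (prg_offset % prg_size)
            for base in range(0, window, prg_size)]

# With an 8 KB PRG, a breakpoint at PRG offset $0500 matches four mirrors;
# the first one ($8500) is what the list would display as "$8500 [$0500]".
print([hex(a) for a in cpu_addresses_for_prg_offset(0x0500, 0x2000)])
```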
Thanks for the details, and I look forward to trying out those improvements!
I encountered another issue tonight. I made a version of one of my tests that uses mapper 218, and it needs to be 8 KB. Trying to load this 8 KB ROM with an NES 2.0 header specifying 8 KB through mapper byte 9 being $0F and byte 4 being $34 (001101 00, or 2^13 * 1 = 8 KB) results in Mesen seriously misbehaving (doesn't load the ROM, has high CPU use, won't load other ROMs, hangs on close). I get the same behavior if I lie and just mark the ROM as being 16 KB with a normal iNES header (with it still only being 8 KB). Using a 16 KB version with that iNES header works fine.
Even if Mesen doesn't support this ROM size right now (and assuming I did the NES 2.0 sizing correctly...), it should still probably fail more gracefully. Testing was done on Mesen 0.9.7.48, and I've attached a ROM that reproduces the issue.
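For reference, here's a Python sketch of how the exponent-multiplier PRG size in that NES 2.0 header is decoded (per the NES 2.0 header spec; the helper name is hypothetical):

```python
def prg_rom_size(byte4, byte9):
    """Decode the PRG-ROM size from iNES/NES 2.0 header bytes 4 and 9.
    When the PRG nibble of byte 9 is $F, byte 4 is in exponent-multiplier
    form (EEEEEEMM): size = 2^E * (2*M + 1) bytes."""
    msb = byte9 & 0x0F
    if msb == 0x0F:
        exponent = byte4 >> 2
        multiplier = byte4 & 0x03
        return (1 << exponent) * (multiplier * 2 + 1)
    return ((msb << 8) | byte4) * 16384  # otherwise, units of 16 KB

# The header from the post: byte 9 = $0F, byte 4 = $34 -> 2^13 * 1 = 8 KB
print(prg_rom_size(0x34, 0x0F))  # 8192
```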
Sour wrote:
You mean it did something like this? (this code makes no sense, but just for illustrative purposes)
Code:
mylabel:
LDA #$00
LDX #$00
BNE mylabel+2
LDY #$00
BNE mylabel+2
Where "mylabel+2" would normally be a branch to the 2nd line (the "LDX #$00" instruction)
In that case, it's probably ok, I think. This isn't likely to happen (and it's not actually giving false information, either). Multi-byte labels in PRG ROM are useful for hardcoded byte arrays, etc., so I can't really disable them altogether.
Internally, the debugger is creating multiple labels (e.g: mylabel+0, mylabel+1, mylabel+2, mylabel+3), but I made it so only the first label is shown, and without displaying the "+0", which is why it still says "mylabel:" rather than "mylabel+0:" on the first line. Otherwise the display for non-disassembled data arrays was a mess, e.g:
Code:
mydata+0:
.db $00
mydata+1:
.db $00
mydata+2:
.db $00
mydata+3:
.db $00
Instead of just:
Code:
mydata:
.db $00 $00 $00 $00
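The display logic Sour describes - one visible label, with the +1/+2/+3 sub-labels hidden - can be sketched roughly like this (Python, purely illustrative; not the actual C# code):

```python
def render_data_block(label, data, bytes_per_line=4):
    """Render a multi-byte label as a single label line followed by .db
    rows, instead of one 'label+N:' line per byte. This is display-only:
    a debugger may still track label+0..label+N internally."""
    lines = [f"{label}:"]
    for i in range(0, len(data), bytes_per_line):
        row = " ".join(f"${b:02X}" for b in data[i:i + bytes_per_line])
        lines.append(f"  .db {row}")
    return "\n".join(lines)

print(render_data_block("mydata", [0, 0, 0, 0]))
```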
Yes Sour, that is exactly what I meant!
Your wisdom and change have made it so much better, for me at least. Now that function name, after assigning a length of 2, doesn't have a "+0" at the end while my p43 still has a "+0" for RAM location $0043! Thank you so much!
Looks like my problem with the 8 KB ROM was that I had failed to set the NES 2.0 identifier bit, so it was interpreting the PRG size as being very large. Setting the identifier correctly allows it to load properly. I'd guess Mesen is misbehaving if the PRG is smaller than advertised.
Yes, the code that loads the ROM will end up in an endless loop if the file is smaller than 16 KB (because that's the smallest PRG size that can be defined in iNES 1.0) - I'll get it fixed soon.
In other news, I've been trying to rewrite my DMC/OAM DMA logic based on
this thread (since conceptually it seems to make a lot of sense), but I haven't managed to find a solution that makes blargg's test pass (the one that measures the time taken by a sprite DMA when a DMC DMA happens in the middle). I do have a "fix" for your test ROM, but I'm not convinced that the DMA behavior in Mesen is 100% accurate in the first place (it's a decent approximation at best), so if possible I'd like to fix it properly rather than tack an extra patch on top of it.
I'll have to see if I can get that ROM to pass on Visual NES first, which could at least help confirm or invalidate what's written in that thread (but the test will take over an hour to run, so it'll have to wait until tomorrow)
And after 1+ hour of running that simulation at a whopping 2 frames per minute, the result isn't excessively useful:
Attachment:
oamdma.png [ 5.99 KiB | Viewed 2658 times ]
I was hoping the delays might match, and they are kind of close, but not perfect (and I don't know what to make of the last 2 values). So I'm not sure how reliable the Visual 2A03 is for this (it could also be specific to Visual NES, but I have no simple way of running this test on the original JS simulators to check, not to mention it would take 10+ hours to run)
I was going to take a look at Bizhawk's implementation of this, but it looks like Bizhawk also doesn't pass your sync test, so that doesn't help, either. Might have to put this particular issue on the "try to figure this out someday" pile, unfortunately.
@Fiskbit, found that you can get Mesen to run 1 CPU cycle by pressing ctrl+b to bring up the "Break In..." window and then pressing enter to select "OK".
It's really easy.
tl;dr: Run 1 CPU cycle via: ctrl+b, enter
Sour, for some reason (maybe I changed something), when holding left or right before reset... and then resetting... the input acts like nothing is pressed. After releasing the held direction, our character moves in the opposite direction. I bet this is my fault, but I can't figure out how to rectify this weirdness. Do you have an idea of what I could have changed to cause this?
This is something I must have done to try to combat a bug that used to be in our game. That bug must have vanished and now Mesen input can be weird. Just found that our game doesn't have this problem when running in FCEUX 2.2.3.
What do you see when you display the input on the screen? (Options->Input->Advanced) As far as I can tell, everything seems to be working properly on my end (the held button stays pressed and works as expected after a reset, without ever letting it go)
unregistered wrote:
found that you can get Mesen to run 1 CPU cycle by pressing ctrl+b to bring up the "Break In...
While this works, it's not ideal from a usability perspective, but most of all, it's a bit misleading. Currently the "break in X CPU cycles" option works by breaking X*3 PPU cycles later (which isn't quite the same). Internally, the debugger can currently only break at the start of X CPU instructions or after X PPU cycles - I'd need to change this to also support X CPU cycles.
I may have found a very small inconsistency in the way that Mesen (and apparently other emulators) defaults attribute data relative to the original NES. I was able to test out a game, which I'm currently developing, on original hardware a few nights ago and noticed that my intro screen did not look right. The intro screen was apparently using the attributes from my main game-screen rather than defaulting to all zeros as it had been in Mesen. I was expecting all attribute data to be zero since I was planning on using palette 0 for the entire intro screen. This is largely "programmer-user-error" since I probably should have initialized the attributes for my intro screen (even if they were all zeros), but I just wanted to let you know in case it was something that you were interested in modifying.
Here are the details. I have an NROM game with essentially two screens and vertical mirroring. In my initialization code, I first initialized the palettes. Then, I initialized the attribute data starting at $23C0. I did NOT initialize any attribute data starting at $27C0. Next, I set the PPUCTRL flag to #%10000001 so that the PPU is pointing at the nametable in $2400. Finally, I loaded the intro screen nametable data into $2400 and then the game-screen nametable data into $2000. I know it's kind of backwards (you'd expect the intro screen in $2000) but that's just how it was.
In conclusion, in my example, Mesen appeared to have defaulted the attributes in $27C0 to all zeros because I did not supply it with any data. However, the NES appears to have used the attributes from $23C0 when it displayed the nametable data in $2400. Alternatively the NES could have copied the $23C0 data into $27C0 somehow. Not sure.
I hope you find this helpful.
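For anyone comparing against hardware here: standard nametable mirroring can be sketched as below (Python, illustrative only). Notably, under horizontal mirroring $23C0 and $27C0 land on the same byte of internal RAM, while under vertical mirroring they are distinct:

```python
def ciram_offset(ppu_addr, mirroring):
    """Map a PPU nametable address ($2000-$2FFF) to an offset within the
    2 KB of internal nametable RAM (CIRAM), per standard mirroring wiring:
    vertical ties CIRAM A10 to PPU A10, horizontal ties it to PPU A11."""
    addr = ppu_addr & 0x0FFF
    if mirroring == "vertical":
        return addr & 0x07FF
    if mirroring == "horizontal":
        return ((addr >> 1) & 0x0400) | (addr & 0x03FF)
    raise ValueError(mirroring)

# Vertical: the two attribute tables are separate bytes.
print(hex(ciram_offset(0x23C0, "vertical")), hex(ciram_offset(0x27C0, "vertical")))
# Horizontal: both addresses hit the same byte.
print(hex(ciram_offset(0x23C0, "horizontal")), hex(ciram_offset(0x27C0, "horizontal")))
```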
Do you have Mesen set to randomize things on power-on? Under Options -> Emulation Settings -> Advanced there are some options that are helpful if you're doing development:
- Enable OAM RAM decay
- Randomize power-on state for mappers
- Default power on state for Ram: Random Values
These will help you catch issues with things not being initialized correctly. (which may or may not answer your question, I'm not sure)
gauauu wrote:
Do you have Mesen set to randomize things on power-on state? Under options -> emulation settings-> advanced
Ah, that may solve it. Will check tonight. Thanks, @gauauu!
Edit: I guess the weird thing on the NES was that it didn’t appear random but rather equal to the other attribute data. Either way, randomizing would have helped me catch this in advance.
Sour wrote:
What do you see when you display the input on the screen? (Options->Input->Advanced)
The input displayed on screen helped me see that the opposite direction reads as pressed. I have spent around two days on this... stepping through reads from $4016, and I can guarantee that the register returns 00 if left or right is pressed and held before reset. Then, I think it inverts the lowest two bits after that direction is released, and it must clear that bit if the initial direction is pressed again. But maybe not exactly like that, because pressing the same direction again halts her.
Sour wrote:
As far as I can tell, everything seems to be working properly on my end (the held button stays pressed and works as expected after a reset, without ever letting it go)
Yes, that's exactly what happens in FCEUX 2.2.3. I must have changed something in Mesen's settings.xml file or another file... hmm...
p.s. Sour, is controller 1's input displayed on screen directly affected by values read from $4016? I doubt it is, but if it is, that would make sense and mean less $4016 checking for me in the future.
Skadiet wrote:
Edit: I guess the weird thing on the NES was that it didn’t appear random but rather equal to the other attribute data. Either way, randomizing would have helped me catch this in advance.
Yeah, that's why I'm a little unsure about whether this will help you or not. It's possible something else is happening.
Umm... so I copied the Mesen exe to a new Mesen folder on a different drive, the Mesen initialization ran and the folders were created... and our game file that I pasted into the new Mesen folder has the same problems. However, when running in FCEUX 2.2.3 it always starts without problems. So it must have something to do with Mesen's code... I can't post the .nes file, so it's OK (with me, at least) if you don't fix this.
edit: hmm... maybe my setting the "new" Mesen up in a similar fashion to my original Mesen has something to do with this? I just enabled Preferences > Advanced > "Disable built-in game database", set up the standard controller's input to work with my NES controller attached to my computer via USB, and added the Pause key (alongside Esc) to pause the game.
unregistered wrote:
p.s. Sour, is controller 1's input displayed on screen directly affected by values read from $4016? I doubt it is, but if it is that would make sense and provide less $4016 checking for me in the future. :)
The overlay shows the state of the controller when the input was polled by the emulator (on scanline 241) - it will update itself regardless of whether or not you read $4016, though.
And your issue is caused by the fact that you're (I'm assuming) using a DirectInput gamepad. When the game is reset, Mesen tries to find new gamepads, and since the DirectInput API is pretty horrible (or because I don't know it in depth - either of these!), it has to "guess" what the default state for the gamepad is, and since you're holding down a button, it assumes that that's the default state for that button. Just tested with an 8bitdo SNES controller and I have the same issue. I can probably fix it, though. But for the record, this shouldn't happen with the keyboard or XInput devices (e.g. Xbox controllers)
Skadiet wrote:
However, the NES appears to have used the attributes from $23C0 when it displayed the nametable data in $2400.
The nametable RAM should be somewhat random on power on, but it will keep those values between resets. Were you testing from a power on each time? Either way, it's likely to just be a coincidence... there's no reason parts of $2000 would be automatically copied to $2400 (unless the mirroring isn't wired up properly, for example)
Like gauauu said, keeping all the options for randomized state/RAM/OAM decay turned on should help you avoid initialization-related issues on hardware, though.
Sour, glad you discovered the exact problem
...thought this didn't happen in a past Mesen, but just attempted with every old Mesen I have and the "problem" has always existed, but you know that (I didn't).
edit: It would have been much better if I hadn't asserted that you knew about that "problem" always existing... Mesen is so complex to imagine, for me at least, and bugs can go unnoticed for a long time. I'm sorry about my assertion, Sour.
Maybe you could copy whatever state the buttons are initialized to, to the "nes"... then just invert the relevant initial state when detecting a button change...
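The failure mode Sour describes can be sketched like this: if the emulator has to guess that the state sampled at detection time is "neutral", a held button reads as unpressed, and releasing it reads as a press (hypothetical simplified code, not Mesen's actual DirectInput logic):

```python
class GamepadReader:
    """Sketch of baseline-relative button detection. If the API can't
    report a true neutral state, the state sampled at (re)detection is
    treated as the baseline - so a button held during reset becomes
    invisible, and releasing it registers as a press."""
    def __init__(self, raw_state):
        self.baseline = dict(raw_state)  # guessed "neutral" state
    def pressed(self, raw_state):
        # A button counts as pressed when it differs from the baseline.
        return {b for b, v in raw_state.items() if v != self.baseline[b]}

# Pad re-detected during reset while Left is physically held:
pad = GamepadReader({"left": 1, "right": 0})
print(pad.pressed({"left": 1, "right": 0}))  # set() - held Left is invisible
print(pad.pressed({"left": 0, "right": 0}))  # {'left'} - release reads as a press
```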
Fiskbit wrote:
I've been using Mesen for my development more lately and have been keeping a list of issues and wants as I encounter them.
[...]
Some more additions:
-Nametable/Sprites viewers now have a "selected tile" overlay like the CHR viewer (along with a new overlay displaying information about the tile, similar to what is shown on the right - this is mostly meant for the new "compact" view for the PPU windows)
-Added a "Display outline around all sprites in preview" option to the sprite viewer
-Added a number of new flags/values that can be used in the watch/breakpoints: PreviousOpPc, Sprite0Hit, SpriteOverflow, VerticalBlank, Branched
These should let you do most of the things you asked for. e.g. to break on a sprite 0 hit, you can create a CPU breakpoint on "any address" and set "sprite0hit" as its condition (you'll need to disable the breakpoint after it first hits, though, otherwise it'll keep hitting on every instruction until the end of the frame)
PreviousOpPc is the value of the previous instruction's PC
Branched is true when the current instruction was reached by a branch (jmp, bpl, jsr, etc.)
This isn't as simple as a "Break on Sprite 0 Hit" option in the debugger, but the "Break on..." list is starting to get rather long, so I'm trying to avoid adding every single possible option under there.
unregistered wrote:
Sour, glad you discovered the exact problem :D ...thought this didn't happen in a past Mesen, but just attempted with every old Mesen I have and the "problem" has always existed, but you know that (I didn't). :)
edit: It would have been much better if I hadn't asserted that you knew about that "problem" always existing... Mesen is so complex to imagine, for me at least, and bugs can go unnoticed for a long time. I'm sorry about my assertion, Sour. :(
This should be fixed in the latest appveyor build, I've tweaked a number of things with the directinput logic to fix this issue (and hopefully another issue that had been posted on GitHub)
And no need to apologize! (I hadn't even noticed your edit until now..)
^Thank you so much Sour! This works extremely great now! I'm so happy!
In the current build, I notice when I'm in a procedure (foo) all the local branch and jump instructions end up changing the address into the procedure name with an offset (foo + 85). This makes it kind of impossible to understand where the branch is going to go to. I can't really count 85 bytes into a function to figure that out.
I guess these should show an address if they don't point to an actual label.
I am guessing this is an accident from trying to show +1 etc. for multi-byte variable/array access, though maybe it would be nice if there were an option to always append the target address (for all instructions, not just JMP/Bxx) when a label has replaced it, maybe in a similar appearance to how calculated addresses already show for indexed instructions.
That's odd - is this a CA65 project? It looks like the label for the function was incorrectly imported as a multi-byte label when it shouldn't have. Looking at my usual CA65 project, it looks like this sometimes does occur, not quite sure why/when at first glance - will have to check when I have a chance.
It should be fixed as of the latest commit - looks like the ".scope" and ".proc" statements were throwing off the logic I was using and ended up creating massive multi-byte labels for the entire block (which could end up overwriting all the other labels contained inside of it, etc.)
Let me know if you're still getting the problem on your end.
Looks like it's fixed, thanks!
I noticed a lot of the ones that were a problem were cc65-generated labels (i.e. L###) that were masked by the offset label replacing them. I dunno if it was just from C-generated code, or if all .proc blocks were subject to this problem. Oh well, seems good now!
One thing I'm wondering is if there's any easy way to "reload" the automatically loaded DBG files? It seems I can "reset labels" but that clears all of them out, and the only way I can think to get them back is either to use the import to manually find the file and import it, or reopen Mesen?
One thing I've been wondering... there is a watch list in the debugger, which updates whenever execution breaks. Being able to use symbols for this is great.
There is also the memory view, which updates continually, but you can't put together a list of things you want to inspect there, nor can you search/use symbolic names for anything.
Is there anything equivalent to the "Ram Watch" view that FCEUX/Bizhawk has that would let you look at a custom set of data updated once per frame? Like say I want to keep an eye on 3 different values that are at different places in memory, but while playing and not having to break on every frame.
I guess ideally it would just be a separate window version of the debugger's watch panel that just updates more often.
rainwarrior wrote:
One thing I'm wondering is if there's any easy way to "reload" the automatically loaded DBG files?
Power cycling should reload the .dbg, but only if the timestamp on it has changed (originally did this to speed up power cycling, but it doesn't behave properly if you manually delete the labels and then do this to try to reload them). Closing all debugger windows & reopening them should also reload the .dbg file.
You can set the watch window to update every frame (Options -> Refresh UI while running), actually. You can search the memory tools using symbols by using the new "Go To All" feature (Ctrl+,), from there you can right-click on the byte and do "Add to watch".
Sour wrote:
You can set the watch window to update every frame (Options -> Refresh UI while running), actually. You can search the memory tools using symbols by using the new "Go To All" feature (Ctrl+,), from there you can right-click on the byte and do "Add to watch".
Ah, Go To All is good! Telling the watch window to update helps, but it would be really useful to have a dedicated watch window.
Currently there is only space for a few lines in it, and I can't expand it vertically very much. The sizing seems to restrict it so that the disassembly panel has to be a minimum size above it.
So... what I'd request is a dedicated watch window, maybe duplicating what's inside the debugger, but separate so it can be taller to fit more stuff. Other options: select unsigned, signed, or hex display per line instead of just the two global options there are now. Being able to save and load the current watch list is also very useful for switching between tasks. Also, some ability to rearrange the watch list would help organize it.
I'm basically just describing FCEUX or BizHawk's "Ram Watch" window, if you want something to compare against. Though the integration with symbolic names and expressions makes Mesen's a lot more powerful... if I could fit more lines into it.
The height is limited by the console status portion (which can't be smaller than X otherwise the contents would be cut off).
Adding a standalone watch window shouldn't be too hard, but I've also been pondering rewriting the watch window with a custom control recently because it's plagued with usability issues, and trying to fix one always seems to cause another. The built-in inline edit feature of the C# listview isn't great, and trying to add a way to reorder the items would probably end up being hard to use at best.
I'll take a look into rewriting that and making it into a standalone window when I get a chance.
Sour wrote:
The built-in inline edit feature of the C# listview isn't great, and trying to add a way to reorder the items would probably end up being hard to use at best.
If I could right click on an entry and choose "move up" or "move down" that would be enough. It's just to prevent having to recreate the whole list to accommodate adding a new one anywhere but the bottom. (I don't think something fancier like drag and drop is necessary. FCEUX just does it with two arrow buttons. Bizhawk has two buttons and a right click menu.)
Are breakpoints/watch entries supposed to persist when I close and reopen Mesen? They always seem to clear for me, and I could swear that didn't happen before...
I thought it might be the "Workspace > Import settings > Reset workspace" but that doesn't seem to affect it. If I reload the ROM with Ctrl+T, the DBG gets correctly reimported and all the symbols update, but the watch/breakpoint entries persist. They only seem to disappear when I close the program.
Also, while I'm here, is there any way to attach a name/note to a breakpoint? It's usually pretty hard to remember them just by address or order in the list.
(Using 9.7.79)
Breakpoints and watchpoints used to persist when exiting/relaunching Mesen (or closing/opening a ROM). If that's not working now, that sounds like a bug.
Edit: Yeah, they seem missing for me as well in the existing project I have, using Mesen Dev-0.9.7.72. Damn, I had about 10-12 breakpoints in there too. :\
rainwarrior wrote:
Are breakpoints/watch entries supposed to persist when I close and reopen Mesen?
Yes, that was a bug - noticed that earlier myself, it should be fixed now (let me know if you still have issues with the next appveyor build)
Also, a few watch-related changes:
-Improved watch window usability - should now be easier to add new entries, less odd behavior like what gauauu reported a few days ago where starting to type with a "{" caused issues, etc.
-Added Move up/down right-click options (and shortcuts)
-Added Import/Export right-click options
-Fixed an issue with expression parsing and improved expression validity checks
-Added support for binary values in expressions (% prefix)
-Added a standalone "watch window" that is a copy of the watch (and synced with it), but can be resized/moved around independently from the rest of the debugger.
The main thing left now would be the ability to select the way to output each expression independently (hex, signed, binary, etc.). A right-click menu for this seems like it might be a bit annoying to manage. What do you think of having a prefix for it? e.g:
D:[$00] to display the result of "[$00]" as an unsigned decimal value
S:[$00] to display the result of "[$00]" as a signed decimal
B:[$00] to display the result of "[$00]" as a binary expression (%10101...)
H:[$00] to display the result of "[$00]" as a hex string
Would that be intuitive/easy to manage? I would also add binary/signed to the right-click options to select a default view for all expressions, like it currently does for the hex view (so the prefixes would be entirely optional)
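A sketch of how such prefixes might be parsed and applied (Python, purely illustrative - this is the proposal from the post, not a shipped feature, and `format_watch` is a made-up helper):

```python
def format_watch(expr, value):
    """Format an (already evaluated) watch value using an optional
    display prefix: D = unsigned decimal, S = signed 8-bit decimal,
    B = binary, H = hex. Expressions without a prefix default to hex."""
    prefix, sep, _rest = expr.partition(":")
    kind = prefix.upper() if sep and len(prefix) == 1 and prefix.upper() in "DSBH" else "H"
    if kind == "D":
        return str(value)
    if kind == "S":  # two's-complement 8-bit interpretation
        return str(value - 0x100 if value >= 0x80 else value)
    if kind == "B":
        return "%" + format(value, "08b")
    return "$" + format(value, "02X")

print(format_watch("S:[$00]", 0xFE))  # -2
print(format_watch("B:[$00]", 0xA5))  # %10100101
print(format_watch("[$00]", 0x0A))   # $0A (no prefix -> default hex)
```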
koitsu wrote:
Edit: Yeah, they seem missing for me as well in the existing project I have, using Mesen Dev-0.9.7.72. Damn, I had about 10-12 breakpoints in there too. :\
Sorry!
Edit:
rainwarrior wrote:
Also, while I'm here, is there any way to attach a name/note to a breakpoint? It's usually pretty hard to remember them just by address or order in the list.
Had missed that. The only way to give a breakpoint a name of sorts at the moment is to label the corresponding memory address, which is probably not ideal. I could add an optional name/description field in the UI and have it display in the breakpoint list/etc easily enough, though.
Sour wrote:
The main thing left now would be the ability to select the way to output each expression independently (hex, signed, binary, etc.). A right-click menu for this seems like it might be a bit annoying to manage. What do you think of having a prefix for it? e.g:
D:[$00] to display the result of "[$00]" as an unsigned decimal value
S:[$00] to display the result of "[$00]" as a signed decimal
B:[$00] to display the result of "[$00]" as a binary expression (%10101...)
H:[$00] to display the result of "[$00]" as a hex string
Or draw from the likelihood that debugger users will be familiar with printf format strings: %u[$00], %d[$00], %b[$00], %x[$00]
(N.B. %b does not exist in standard C.)
or %c[$00] to display as a character in the current codepage
Ah, thanks! Currently in the middle of some testing but I will check the updated build soon.
As for the prefix thoughts, I think maybe also add an optional number of bytes to the prefix (S2, S4, etc.). It becomes important with 24- or 32-bit types, especially with signed representation, though with hex it would also be good to be able to specify the number of digits. Currently hex seems to just expand to cover the size of the value, and automatically pads with extra 0s if {} is involved?
Like I've been using expressions to make 24 bit types out of [] and {}, but signed representation was a feature I couldn't kludge together. (Sometimes just typed stuff like "256-[var]" and had "[var]" next to it to kinda look at both at once, but it was awkward.)
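The signed representation being asked for is just a two's-complement reinterpretation of the assembled value; a sketch (Python, illustrative - the width-suffixed prefix like "S3" is only a proposal here, and `to_signed` is a made-up helper):

```python
def to_signed(value, bits):
    """Reinterpret an unsigned integer as a two's-complement signed
    value of the given width (8/16/24/32 bits)."""
    sign_bit = 1 << (bits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)

# A 24-bit value assembled from three byte reads, e.g. via
# ([$02] << 16) | ([$01] << 8) | [$00] in a watch expression:
raw = 0xFFFF38
print(to_signed(raw, 24))  # -200
```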
tepples wrote:
%c[$00] to display as a character in the current codepage
This wouldn't look so clean if you were using expressions other than just []. It can't be a modifier on a single term, it has to apply to the whole expression. e.g. "%c[var]+25"? I think the colon idea does a good job of visually making the division clear.
A few thoughts on the movie feature, and the event viewer:
Using the movie recording feature I've noticed that if I use randomized RAM, the movie will be desynched when playing back if the game depends on starting RAM values. (e.g. the golf game I'm working on randomly generates a completely different set of holes, so the movie is useless for that).
There seems to be no UI for movie playback, so it's hard to figure out things like how long it's been playing or how long it has to go, or whether I've accidentally cancelled playback with other input, etc. In particular I'd probably like to be able to display:
- If the movie is in progress, or finished
- What frame of the movie I'm on
- How many frames are in the movie total
Additionally I'd probably want to be able to automatically pause as soon as the movie ends, both to know that it's stopped, but also to prevent from accidentally doing unintended input after it completes.
The "history viewer" is great, though maybe it could be combined nicely with the movie feature? Like if I loaded a movie to play back, maybe it could load into the event viewer, which I could use to preview it and jump around before I start using the movie to debug something or make a video recording, etc. (I guess it would take a long time to re-render the cache of history from the movie though...)
The history viewer could really use the ability to advance/rewind frame by frame, I think, and to see a frame count (not just minutes/seconds). I love the preview, since I can leave the game paused while I peruse the history without destroying the current state. The scaling looks really weird in this window, though - no matter what size I make it, there are weird artifacts that look like the image was rendered at some resolution smaller than 256x240 and then nearest-neighbour rescaled up?
The export menu mentions "segment" - is it possible to capture a smaller piece of a movie than the whole thing? Does it automatically start a new segment after 300 minutes or something? (I would assume a movie that doesn't start from power-on would have a savestate up front, which would fix that desync issue, but maybe that would be appropriate for all movies, not just ones that start from the middle.)
At the moment I really wanted to just save a movie file so I could replay a thing that just happened, and then capture an AVI of it. Unfortunately it's desynched, but it's what got me thinking about the lack of movie playback UI, 'cause it's a 30 minute movie and I have no way to really skip forward in it, or know how far along it is other than watching it (which is why I think it might be neat if a played movie would just "load" into the history viewer).
Oh, one more thing, if I close Mesen while an active AVI recording is still happening it will crash.
Yea, movies will desync on random RAM - it's been a "todo" in the code for a long time. Most movie-related features/improvements are more or less pending me getting to building TAS tools eventually..
There's already an option (enabled by default) to pause when the movie ends, and I'm pretty sure there's an option to show a play button while a movie is playing (both are in the preferences' general tab iirc).
Loading a movie into the history viewer is actually something that's been on my list ever since I added the history viewer.
The segments are created when state is loaded by the user, it's not related to the rewind limit (although this probably breaks in some fashion if you do reach the rewind limit). Creating a smaller segment for export is possible, just a matter of having a UI for it. IIRC, all history viewer movies currently contain a save state, even those starting from power on.
Some of this shouldn't be too hard to fix, I'll take a look.
About the scaling issue in the history viewer, are you at a DPI setting other than 100% in windows? That could be the cause.
Sour wrote:
About the scaling issue in the history viewer, are you at a DPI setting other than 100% in windows? That could be the cause.
No, it's at 100%. I see this the same on two different machines, one is Win7, one is Win10.
It's kind of weird, seems to be intermittent behaviour... I've attached a screenshot (ROM) where you can see that it looks like it was rescaled once at some lower resolution, and now rescaled again to the window size (nearest-neighbour both times)
Playing with it, it seems to be reproducible when I start playing something in the history viewer at a small size, hit pause, and then resize it to a larger window... so maybe it's just the history viewer not redrawing when it's paused... which is maybe not a real problem. I thought I was seeing it while it was playing too, though, but I may have been wrong - maybe I was just seeing the one intended layer of scaling artifacts at that point.
Though it would be nice if I could snap/set the history viewer to 1x/2x/3x size. I suppose the real issue I'm dealing with here is that I just can't get a clean view because of that last nearest-neighbour upscale to the window size. I've tried really hard to manually size the video to 1x or 2x but can't ever seem to find the right spot? (Almost feels like the correct integer is somehow inaccessible... that can't be true, can it?)
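Snapping to a clean integer scale is straightforward in principle; a sketch of the idea (Python, illustrative - not Mesen's code, and the function name is made up):

```python
def snap_scale(window_w, window_h, base_w=256, base_h=240):
    """Snap a resizable preview to the largest integer multiple of the
    native 256x240 frame that fits in the window, avoiding the
    nearest-neighbour artifacts that fractional scales produce."""
    scale = max(1, min(window_w // base_w, window_h // base_h))
    return base_w * scale, base_h * scale

print(snap_scale(700, 500))  # (512, 480) - a clean 2x
print(snap_scale(200, 200))  # (256, 240) - never below 1x
```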
Sour wrote:
Yea, movies will desync on random RAM ... IIRC, all history viewer movies currently contain a save state, even those starting from power on.
Oh, that's interesting... does the RAM randomization apply after the savestate, replacing its contents? Would my existing desynching movies be able to work after the fix, or do they not have the needed information in the savestate?
Sour wrote:
There's already an option (enabled by default) to pause when the movie ends, and I'm pretty sure there's an option to show a play button while a movie is playing (both are in the preferences' general tab iirc).
Ah, I see both. Somehow I missed them when I went looking, sorry.
Sour wrote:
The segments are created when state is loaded by the user, it's not related to the rewind limit (although this probably breaks in some fashion if you do reach the rewind limit). Creating a smaller segment for export is possible, just a matter of having a UI for it.
That makes me think it would be really great to be able to "export save state" from the history viewer, too. Though I suppose this is already possible by saving a movie, playing it back, and making save states during playback... as long as the movie doesn't desync.
Following up on the other changes:
- Breakpoints/watch being erased on close seems fixed.
- New watch window is awesome!
- I like that it's shared with the debugger instead of being two separate instances.
- Watch up/down works well.
- Watch import and export too! Nice!
- Binary number literal prefix is interesting... makes me realize I never listed binary as a display option for values.
- The ? help button on the watch panel in the debugger is great for quick reference, though it'd be nice to have somewhere in the watch window too.
This is all very nice! Dunno what was fixed about expression evaluation but I'm sure it's for the good.
The screen's HUD scales along with the picture when resizing occurs while paused; the main window does this, too. It's been like this for a very long time. I vaguely recall it being annoying to fix, but I could be wrong.
I'll look into adding scaling shortcuts for the history viewer (and checking to ensure it's displaying properly - off the top of my head, I thought it was meant to open to be the same scale as the main window itself, but I could be remembering incorrectly)
rainwarrior wrote:
Oh, that's interesting... does the RAM randomization apply after the savestate, replacing its contents?
It shouldn't - was the video that was desyncing taken using the history viewer? If so, I would have expected it to work, but it's possible there's a problem somewhere w/ regards to random RAM. RAM randomization is applied at power on only, so loading a movie should be doing Power On -> Randomize -> Load state (and overwrite all that randomization with what is in the state instead)
rainwarrior wrote:
That makes me think it would be really great to be able to "export save state" from the history viewer, too
You can also do "Resume gameplay from here" to load the state in the main window and then take a save state manually. It might be fairly easy to have an export save state option, though.
rainwarrior wrote:
The ? help button on the watch panel in the debugger is great for quick reference, though it'd be nice to have somewhere in the watch window too.
Yea, I meant to fit that in somewhere too, but in my rush to get it done, I forgot :p
rainwarrior wrote:
This is all very nice! Dunno what was fixed about expression evaluation but I'm sure it's for the good.
Mostly just silly things, like "$$$$30" or "[$00]44" being "valid", even though it's fairly unclear what they actually resolve to.
Sour wrote:
The screen's HUD scales along with the picture when resizing occurs while paused; the main window does this, too. It's been like this for a very long time. I vaguely recall it being annoying to fix, but I could be wrong.
That in itself isn't really a problem, so if it's just that, then I'd say it doesn't need to be fixed. It was just confusing in tandem with the second step of nearest-neighbour scaling to window size, I think, but if I could just get a 1x/2x/3x window size for that it wouldn't be an issue.
Sour wrote:
off the top of my head, I thought it was meant to open to be the same scale as the main window itself, but I could be remembering incorrectly)
It appears to remember the last size used, even after closing and reopening the program.
Sour wrote:
was the video that was desyncing taken using the history viewer? If so, I would have expected it to work, but it's possible there's a problem somewhere w/ regards to random RAM. RAM randomization is applied at power on only, so loading a movie should be doing Power On -> Randomize -> Load state (and overwrite all that randomization with what is in the state instead)
Yes, the video was made with history viewer, and checking in the debugger it appears to initialize all RAM to 0. (Random RAM was the setting used while playing.) Maybe the save state gets taken before the randomization happened?
Sour wrote:
rainwarrior wrote:
That makes me think it would be really great to be able to "export save state" from the history viewer, too
You can also do "Resume gameplay from here" to load the state in the main window and then take a save state manually. It might be fairly easy to have an export save state option, though.
The resume gameplay feature is very good, but what I meant was being able to pick a state from the middle without destroying everything that happens after it would be very useful too. Like I found myself thinking hard about whether I wanted to lose a bunch of stuff to look at an older event or not. (Though the main problem was just the movie save having failed.)
rainwarrior wrote:
Yes, the video was made with history viewer, and checking in the debugger it appears to initialize all RAM to 0. (Random RAM was the setting used while playing.) Maybe the save state gets taken before the randomization happened?
Turns out I was just remembering things incorrectly - originally history viewer movies all had save states while I was in the middle of coding it, but I removed the save state from the first segment if segment starts from power on and the ROM does not have the battery flag set. A quick fix for now would be to force a save state if the ram setting is set to random, since movies can't properly handle that at the moment.
Sour wrote:
A quick fix for now would be to force a save state if the ram setting is set to random, since movies can't properly handle that at the moment.
Since there are 3 different settings, it's probably best to always include a save state? Otherwise the user would have to know/guess which setting the movie was recorded with.
With FCEUX there was opposition to starting from power-on with a savestate in the format (I don't know the rationale), but FM2 also has a bunch of annotation data for settings, and when the randomize-RAM option was added, it included an annotation with the seed used... but IMO just using a savestate seems simpler and easier? I doubt many would care about the file size being a couple KB larger.
Tool-assisted speedrunners might prefer a separate representation for a power-up state so that they can verify that the power-up state is indeed a power-up state, not a cheated save state.
For me personally at least, the main benefit of keeping a power-on movie as stateless is that the movie can be expected to work even if I have to break save state compatibility for any reason (and in fact, if they didn't, I'd have to re-record my 240+ test rom movies every time I broke save state compatibility!). If I force all movies to have a save state, it would make breaking save state compatibility even more of a big deal than it may already be. FYI, all movies from power on have no save state, including ones with save RAM - instead the save RAM's data is included into the movie file. The only exception here is the history viewer because I can't know what the save RAM's original state was (well, I mean, I could probably just keep a backup of it)
Here's what I have for the watch at the moment:
Attachment:
watch.png [ 14.29 KiB | Viewed 7886 times ]
Letter with no number = always assume 1 byte values
Technically, anything above 4 bytes won't work because the evaluator returns a 4-byte signed int to the UI as its result. I imagine NES games don't often handle values over 32 bits anyway (and increasing it to 64-bits would conflict with some implementation details of the expressions that are hidden away by the 32-bit limit)
Decimal display is the same as before: 4-byte signed (so it's the same as prefixing S4 everywhere)
Hex is equivalent to H1
Binary is equivalent to B1
I think these are sane defaults?
Hex/Binary/Unsigned are automatically extended to fit.
Not sure how to best handle signed values if they are out of bounds, e.g. what should "S1:$4FFF" output?
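The rules above can be sketched out in Python to make the behaviour concrete. This is only an illustration of the described defaults, not Mesen's actual code; the `%` prefix for binary is assumed from 6502 convention:

```python
def format_watch(value, fmt="S4"):
    """Format a watch value per the rules described above (a sketch,
    not Mesen's implementation). Letter with no number = 1 byte;
    plain decimal display corresponds to S4."""
    kind = fmt[0].upper()          # S=signed, U=unsigned, H=hex, B=binary
    nbytes = int(fmt[1:]) if len(fmt) > 1 else 1
    if kind == "S":
        # interpret the low nbytes as two's complement
        mask = (1 << (8 * nbytes)) - 1
        v = value & mask
        if v >= 1 << (8 * nbytes - 1):
            v -= 1 << (8 * nbytes)
        return str(v)
    if kind == "U":
        return str(value)
    if kind == "H":
        # pad to the declared width, but extend to fit larger values
        digits = max(2 * nbytes, (value.bit_length() + 3) // 4)
        return f"${value:0{digits}X}"
    if kind == "B":
        # pad to 8 bits per byte, extending 1 bit at a time as needed
        bits = max(8 * nbytes, value.bit_length())
        return f"%{value:0{bits}b}"
    raise ValueError(fmt)
```

Note this sketch just truncates out-of-range signed values (S1 on $4FFF would show -1), which is exactly the ambiguity being asked about.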
Sour wrote:
For me personally at least, the main benefit of keeping a power-on movie as stateless is that the movie can be expected to work even if I have to break save state compatibility for any reason
Ah, well that's a practical reason.
I don't know how you handle your savestates, but if that's off the table, I'd at least suggest that a movie contain info about the version of Mesen and the emulation settings that affect things like mapper/RAM init. It'd be frustrating to have a movie that you can't get working, with no idea what combination of settings or what version of the emulator might be able to run it. (I haven't asked about random mapper init, but it seems like it's in the same boat?)
IIRC FCEUX does a named data fields thing in FM2 so that it can add new information fields that old versions can ignore... I believe Nestopia savestates are similar, with information parcelled into packets with a 3 character name.
The types for watch look good! Does B/B1 always pad with zeroes to 8 bits? I notice you have a 9 bit value in the display there.
As for 64 bit types, I've never needed them for NES. I've definitely used 32 bit though.
I dunno what S1:$4FFF should do, maybe just give an error result? "invalid range" or something?
rainwarrior wrote:
I'd at least suggest that a movie contain info about the version of Mesen, and emulation settings that should affect things like mapper/RAM init.
It does: Mesen movies are zip containers pretty similar to BizHawk's; each contains a "GameSettings.txt" file with (what should be) everything that can potentially affect the emulation outcome, e.g:
Code:
MesenVersion 0.9.7
MovieFormatVersion 1
GameFile Super Mario Bros. + Duck Hunt (U) [!].nes
SHA1 663DDEF71CC808C5382A3426EF6D03B5499A51C8
Region NTSC
ConsoleType NES
Controller1 StandardController
Controller2 Zapper
CpuClockRate 100
ExtraScanlinesBeforeNmi 0
ExtraScanlinesAfterNmi 0
InputPollScanline 241
DisablePpu2004Reads false
DisablePaletteRead false
DisableOamAddrBug false
UseNes101Hvc101Behavior false
EnableOamDecay false
DisablePpuReset false
ZapperDetectionRadius 0
RamPowerOnState 4294967295
Here, RamPowerOnState is actually set to "-1" in the code for random. Normally this would be either 0 or 255.
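Since the file is plain space-separated key/value text, it's easy to inspect or parse outside the emulator. A hypothetical reader (illustration only, not Mesen's actual loader) might look like this, including the unsigned/signed round trip that explains that RamPowerOnState value:

```python
def parse_game_settings(text):
    """Parse the space-separated key/value lines shown above into a
    dict. Splitting on the first space keeps values that themselves
    contain spaces (like GameFile) intact."""
    settings = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(" ")
        settings[key] = value
    return settings

sample = "MesenVersion 0.9.7\nRamPowerOnState 4294967295"
s = parse_game_settings(sample)

# 4294967295 is just -1 ("random") stored as an unsigned 32-bit value:
v = int(s["RamPowerOnState"])
ram_state = v - (1 << 32) if v >= (1 << 31) else v
```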
Quote:
Does B/B1 always pad with zeroes to 8 bits? I notice you have a 9 bit value in the display there.
At the moment, yes, and it'll extend 1 bit at a time as needed. Is that what you'd expect?
Quote:
I dunno what S1:$4FFF should do, maybe just give an error result? "invalid range" or something?
That's pretty much what I was thinking, too.
Sour wrote:
Mesen movies are just zip containers pretty similar to Bizhawk's, it contains a "GameSettings.txt" file with (what should be) everything that can potentially affect emulation outcome
Ah! That's neat. That's a good way to make it human readable/accessible too... as long as they know where to look.
Sour wrote:
Quote:
Does B/B1 always pad with zeroes to 8 bits? I notice you have a 9 bit value in the display there.
At the moment, yes, and it'll extend 1 bit at a time as needed. Is that what you'd expect?
Yeah I think that makes sense. The thing I was more concerned about was whether 16 or 8 bits would be padded, because with binary and hex I think it's better that the output has a constant number of digits so you can read them in columns. But for a value that's 1 bit larger than expected, I dunno, just extending it a bit at a time is probably fine, I don't think that will come up very often anyway.
Though maybe 16 bits is a lot of binary digits to stare at... maybe a separator would be useful after every 4 or 8... (e.g. C++14 allows a quote as separator)
Then again, maybe a monospace font would help too, w.r.t. stationary columns.
Anyhow, that's getting into not very important details maybe. Just being able to break this window out for more flexible display and more vertical space is fantastic already. Thanks for that!
After I started updating the documentation for this, I ended up thinking a ", [format]" suffix would be consistent with the array display format that's already available, so I changed it to that instead. I think it improves readability, too.
I added a right-click menu to change (or clear) the format (which can be used on multiple entries at once), here's what it looks like at the moment:
Attachment:
watch.png [ 25.52 KiB | Viewed 7809 times ]
(It can now be used to control how arrays are displayed too)
rainwarrior wrote:
Though maybe 16 bits is a lot of binary digits to stare at... maybe a separator would be useful after every 4 or 8...
Adding a comma every 4 bits would probably make sense (makes it easier to convert to/from hex that way, too)
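A sketch of that separator idea (illustrative only; here the value is padded up to a whole nibble so the groups line up, rather than extending one bit at a time):

```python
def bin_grouped(value, min_bits=8, sep=","):
    """Render a value in binary with a separator every 4 bits, as
    suggested above. Padding to a whole nibble keeps the groups
    aligned with hex digits."""
    bits = max(min_bits, value.bit_length())
    bits += (-bits) % 4                      # round up to a whole nibble
    s = f"{value:0{bits}b}"
    # insert the separator between nibbles, left to right
    return sep.join(s[i:i + 4] for i in range(0, bits, 4))
```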
Edit: This was committed and will be available on appveyor in a few minutes.
Conversation from the Grunio thread:
tokumaru wrote:
M_Tee wrote:
The row of garbage pixels at the top is intentional, and is in all of the games we've made so far. We render our picture starting one row of pixels lower. I'll have to let Łukasz chime in on the specifics, but from what I recall, it makes sprite drawing more convenient in some way.
It's because the PPU displays sprites one scanline lower than the actual value set for their Y coordinate, so if you also lower the background by 1 scanline, you get a perfect match between sprite and background coordinates, sparing you the trouble of compensating for that 1 scanline in collision checks and the like. You can probably make it look less glitchy if you set the scroll to 239 instead of 255 though, because you'd at least get valid NT data instead of AT data interpreted as if it were NT data.
Quote:
It'll definitely be covered up by overscan (even on a PAL system from what I've been told).
I was under the impression that all scanlines were visible in PAL.
Quote:
I'm actually a little disappointed that Mesen, by default, displays the PAL overscan rows even when region is set to NTSC (as it makes it difficult to suggest the emulator for casual players who would be more reluctant to adjust the setting).
Heh, I'm the exact opposite! As a developer, I like to see the entire picture by default, and I hate that FCEUX crops 8 scanlines from the top and bottom by default!
lidnariq wrote:
tokumaru wrote:
I was under the impression that all scanlines were visible in PAL.
PAL (both 2C07 and UA6538) displays 252x239 pixels. Left and right two pixels, and top scanline (where sprites couldn't appear) are all black instead.
Sour wrote:
I wish I had a simple solution here - would be convenient if the header could be used to specify stuff like this (if only for homebrew's sake), but I imagine a lot of people would be opposed to changing this.
Would a usable compromise be to display 224 scanlines normally, but all 240 when any of the various Debug windows are open?
I had originally posted in that thread, but am moving it here:
Quote:
I understand the desire to see all for some reasons while developing, but there's also a time in development when you need to start seeing what the player will see.
Another downside with displaying everything by default is that it will further confuse new developers about screen-safe area, a problem which already plagues developers even with NTSC area cropped. See the development histories of Super Tilt Bros. and The Mad Wizard for games which had to correct layout to compensate for screen-safe areas.
I actually think having a semi-opaque, check-screen-safe-area overlay available for developers to preview with would be very useful, that not only shows what for sure is and isn't displayed on PAL/NTSC, but also has the crosshairs (from the Nintendo internal screen-planning sheets) or something derived from Tepples' screensafe chart to indicate what *might* be hidden on an NTSC. Having a clearly labeled menu in the dropdown would attract new devs to try it and see what's happening.
Anyway, tying the default (crop 8 rows or not) to the region setting wouldn't be a bad idea. It just seems that if neither PAL nor NTSC screens render that top row, there's no benefit in displaying it, especially by default. Displaying area outside of what the hardware shows really does seem like a developer's feature, and developers are going to be more willing to dig through settings to see what they want/need; but when I'm trying to suggest the emulator to new or casual players (especially those who might be willing to try the game, but aren't deep into emulation), it's not likely they're going to be willing to, or even know to, change settings to see what the devs intended.
I like lidnariq's suggestion to have it as a debug feature. Maybe even a shortcut key to toggle between full area display and whatever the user has overscan set to, to quickly check when desired.
M_Tee wrote:
I actually think having a semi-opaque, check-screen-safe-area overlay available for developers to preview with would be very useful, that not only shows what for sure is and isn't displayed on PAL/NTSC, but also has the crosshairs (from the Nintendo internal screen-planning sheets) or something derived from Tepples' screensafe chart to indicate what *might* be hidden on an NTSC. Having a clearly labeled menu in the dropdown would attract new devs to try it and see what's happening.
This is already available as a built-in Lua script (based on tepples' image on the wiki), but it's definitely not "easy" to discover - I wouldn't be surprised if I'm the only one who knows about it.
Maybe a shortcut key to switch between "overscan on", "overscan off" and "show safe regions overlay" would be useful for devs?
Forcing the whole screen to be shown automatically when debug tools are opened isn't really ideal (a user might have a reason to want to use the debug tools with the overscan active, etc.).
That being said, both Nestopia UE and FCEUX default to cutting the top and bottom 8 rows (though puNES shows everything by default), so maybe I should just revert to using that as a default as well - it's easier for 50 developers to set it to 0 than it is for hundreds of users to set it to a sane default.
rainwarrior wrote:
[history viewer feedback]
I've fixed it so all movies will have a save state included when using randomized RAM.
Also added:
-The ability to export a portion of a segment (by specifying the start/end in hh:mm:ss)
-A fairly basic "scale" option to resize the screen
-The ability to export a save state for the current position (e.g to avoid having to resume gameplay and then saving the state from the main window)
Still leaves a few things (which might have to wait until after the next proper release):
-Showing current/total frames while playing a movie (on the main window, or in the history viewer)
-Moving back and forth in the history viewer 1 frame at a time
-Loading up an existing movie into the history viewer to be able to browse through it without watching the whole thing
Sour wrote:
This is already available as a built-in Lua script (based on tepples' image on the wiki), but it's definitely not "easy" to discover - I wouldn't be surprised if I'm the only one who knows about it.
Maybe a shortcut key to switch between "overscan on", "overscan off" and "show safe regions overlay" would be useful for devs?
Forcing the whole screen to be shown automatically when debug tools are opened isn't really ideal (a user might have a reason to want to use the debug tools with the overscan active, etc.).
That being said, both Nestopia UE and FCEUX default to cutting the top and bottom 8 rows (though puNES shows everything by default), so maybe I should just revert to using that as a default as well - it's easier for 50 developers to set it to 0 than it is for hundreds of users to set it to a sane default.
That sounds really good.
I'm eager to dig through Mesen and see what more it has to offer. You've chosen a good default palette and included solid alternatives. It's also great how the full palette is previewed while being changed, and I'm loving other features such as the prompt to continue the previous game session when opening.
One minor issue I noticed tonight is that stepping backward out of a function causes the next step backward to mark the instruction as the start of a function. There's also no way to remove this mark that I can find.
Thanks a ton for the PreviousOpPC option for conditions; that's been a big help for me already. Along these lines, I've also got a request for something similar for memory and maybe registers that I think could be useful.

When reverse engineering, I sometimes find myself wondering if a load or store ever winds up mattering (perhaps to know whether it's unused and can be removed, to save bytes or cycles). For example, in Zelda, the collision detection code loads a value into Y that I currently suspect is never used outside the function, and stores horizontal and vertical position deltas into a couple temporaries that I didn't think were used. Auditing all loads from those temporaries throughout the ROM leads me to believe that the horizontal one isn't used, but I did find a vertical one that took quite some time to confirm from reverse engineering.

I'd find it very useful if I could set a condition on a read breakpoint restricting it based on whether the variable was last written by a particular OpPC, to help me more quickly determine if something is used. Something similar for registers would be nice, though I'm less sure what it would look like since registers can't be breakpoint targets. This would be more useful still if it kept a list of influencing writes (one instruction stored and then another shifted, so the value depends on both of them, but another store later clears the list because the previous writes don't matter anymore), but I suspect that could have performance concerns.
(This is kind of along the lines of the advanced code/data logging I'd really like to have at some point that could understand the history of values for being able to correctly mark moved bytes as code/data based on how they get used later, or understand table boundaries, etc. Having a fairly comprehensive log would make it very easy to quickly answer these sorts of questions without even using the debugger. The debugger condition is smaller scoped, but because it'd be running all the time rather than just when logging, I worry about the performance impact of tracking more than just the last write.)
Fiskbit wrote:
One minor issue I noticed tonight is that stepping backward out of a function causes the next step backward to mark the instruction as the start of a function. There's also no way to remove this mark that I can find
Thanks, this should be fixed. The function start data is stored in the CDL file, so you can clear them by resetting the CDL log (from the Tools->Code/Data Logger menu)
RE: more advanced tracking of reads/writes, I kind of feel like this would be a better match for Lua scripting than it would be for adding directly to the debugger itself. A script that monitors reads/writes and keeps tracks of whether or not the values are used should be pretty trivial to add, whereas adding this directly to the debugger would have at least some sort of performance impact (and would probably not even be as flexible as a Lua script, either).
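The kind of script being described could track a "last writer" per address. Here's a rough sketch of the bookkeeping in plain Python (the method names are hypothetical; a real version would hook the emulator's Lua read/write callbacks):

```python
class LastWriterTracker:
    """Sketch of the 'which instruction last wrote this address' idea
    from the posts above. Any store PC that never ends up in `used`
    wrote a value that was overwritten before being read."""

    def __init__(self):
        self.last_writer = {}   # address -> PC of last instruction that wrote it
        self.used = set()       # writer PCs whose value was later read

    def on_write(self, addr, op_pc):
        # a new write supersedes the previous writer for this address
        self.last_writer[addr] = op_pc

    def on_read(self, addr):
        # credit whichever instruction produced the value being read
        if addr in self.last_writer:
            self.used.add(self.last_writer[addr])
```

For example, if a store at $8000 is always overwritten by a store at $8100 before anything reads the variable, only $8100 shows up as "used":

```python
t = LastWriterTracker()
t.on_write(0x10, 0x8000)   # first store to $10
t.on_write(0x10, 0x8100)   # overwritten before any read
t.on_read(0x10)
```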
Hello, I have a little feature request for Mesen.
It is known that the NES has three possible CPU/PPU synchronization phases every power on, and I'd like to see an option to select which synchronization phase you get.
I'm asking because of Dragon Warrior 4. This game has a very tricky random number generator that is sensitive to the exact number of times it can execute an INC loop at the end of every frame. Even losing a single PPU dot worth of timing can throw off all the numbers drastically.
For background, the game runs this at the end of every frame: (14 CPU cycles per iteration)
Code:
FF77:
NOP
NOP
INC $12
CMP $050C
BEQ $FF77
This is also not the only operation involving memory $12, it is also used as part of a linear feedback shift register RNG.
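To see why the RNG is so fragile, here's a toy model of that frame-end busy loop (using the 14-cycles-per-iteration figure from the post; the real loop exits on a CMP match against $050C, so this is only an approximation):

```python
CYCLES_PER_ITER = 14  # per the post above; exact count depends on the loop exit

def frame_end_counter(idle_cpu_cycles, start=0):
    """Model the INC $12 busy-loop: $12 is incremented once per loop
    iteration for however many CPU cycles remain before the frame ends.
    Rough sketch only -- real behaviour depends on CPU/PPU alignment."""
    return (start + idle_cpu_cycles // CYCLES_PER_ITER) & 0xFF

# Losing even ~10 CPU cycles (a few PPU dots' worth of drift accumulated
# over a frame) lands on a different value of $12, and since $12 also
# feeds the LFSR, every subsequent random number changes.
```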
Here is a test procedure you can use to test out Dragon Warrior 4's RNG:
* Delete all saved games
* Create a saved game in slot 1. Name: A, Male hero, Message speed Fast
* Save at the first House of Healing in the castle town.
* Power off.
* Power on and Hold Down+A from the copyright screen and file selection screen
* After "Welcome back, Ragnar!" appears, but before the second line of dialog appears, release A and continue to hold Down
* Continue to hold down, and note how the blue guy near you moves around.
Dragon Warrior 4's RNG is so tricky that every emulator seems to disagree on exactly what RNG values you will get. I have tested out many emulators, and they all gave different RNG output. The only match was between Mesen and Nintendulator, and that does not necessarily indicate that both are "correct" and the others are "wrong".
I was under the vague impression that NTSC had 4 different power-up alignments? (e.g. this thread).
That being said, there's a way to tweak (code-wise) the number of PPU cycles that run during the CPU's boot-up sequence, which has an impact on a number of things; maybe that would be sufficient to trigger the different RNG results in DQ4?
Otherwise, emulating the actual alignments requires emulating down to the master clock level, I think? That would probably cause a noticeable hit on performance, I'd imagine...
Dwedit wrote:
Hello, I have a little feature request for Mesen.
It is known that the NES has three possible CPU/PPU synchronization phases every power on, and I'd like to see an option to select which synchronization phase you get.
To me, this sounds like you're asking for essentially a way to cheat -- or rather, "make consistent" -- the RNG algorithm, probably related to things like this. Am I wrong?
Since there are 3 dots per CPU cycle, I think this is specifically about the alignment of whole dots and whole cycles (which dot a CPU cycle begins on), rather than which of 4 positions within a dot a CPU cycle begins on.
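In other words, the three possible alignments can be modeled as a fixed phase offset (sketch only; names are illustrative):

```python
def cpu_cycle_to_dot(cpu_cycle, phase):
    """Map a CPU cycle number to the PPU dot on which it begins.
    On NTSC there are 3 PPU dots per CPU cycle, so the power-on
    alignment is just which of the 3 phases (0-2) the CPU starts on."""
    assert phase in (0, 1, 2)
    return cpu_cycle * 3 + phase
```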
@Sour: I've been encountering some crashing on the newest couple builds of Mesen (0.9.7.92/93). In my current ROM, if I have the following two breakpoints enabled simultaneously, the emulator closes without any sort of error message.
CPU:-W- $0010
CPU:-W- $8000-FFFF where OpPC < $FFAC || OpPC > $FFBF
Another issue I found is that the add breakpoint option when right clicking on a tile in the Nametable Viewer tries to add a wrong kind of breakpoint (Palette RAM breakpoint type on address $1F).
Also, any chance you could add a breakpoint option to that Nametable Viewer right click menu for the attribute byte?
Finally, is there any way right now to view mapper register state? From past experience, I am guessing the issue I'm currently debugging is an errant write above $8000, which causes me to load the wrong bank later (MMC1), but being able to actually see around swap time that the mapper state isn't right would definitely help track this sort of thing down. Given that there are so many different mappers, though, I'd not be surprised if there were no way to get at the info.
Regarding Lua scripting for better logging, that hadn't crossed my mind and is a great idea. I'll look into that when I manage to free up some time from other projects!
Fiskbit wrote:
Since there are 3 dots per CPU cycle, I think this is specifically about the alignment of whole dots and whole cycles (which dot a CPU cycle begins on), rather than which of 4 positions within a dot a CPU cycle begins on.
Last night I was digging around old posts and found some tests done by ulfalizer and blargg about this:
viewtopic.php?f=3&t=10029
https://wiki.nesdev.com/w/index.php/Use ... ing_charts
Thanks! The breakpoint crash should be fixed. As far as I can tell, it's been this way for over half a year, so I'm surprised it's never been reported before (although I guess it's hard to report, since your breakpoints are lost in the crash). It occurred when mixing breakpoints with conditions and breakpoints without conditions for the same type of breakpoint (e.g. write breakpoints, in this case).
The add breakpoint option for the nametable viewer now correctly sets the address value in the popup and I also added an option for adding a breakpoint on the attribute byte (along with a matching shortcut configuration).
There's no way to see mapper-related config at the moment (although from Lua you can check the current memory mappings, e.g what's displayed at the bottom of the debugger window). It might be feasible to add mapper-specific information, e.g for the most common mappers, but there's no (easy) way to do this for all mappers (if only C++ supported reflection...).
A good way to track down something like this might be to use the trace logger's conditional logging option to log only writes to the registers that you suspect are causing the issue (e.g. "IsWrite && Address == $8000"); that should make it fairly obvious where/when the write occurs.
Sour, it's great to see (and benefit greatly from) your continued success with Mesen!
Here's one minuscule visual change you might consider. My suggestion ('*' represents dark pixels, ' ' represents transparent pixels):
The second controller in the debugger's Input Button Setup section is labeled
Code:
***
  *
 *
*
***
^that looks like a Z to me; how about:
Code:
 **
  *
 *
*
***
?
Sour, in Mesen's debugger there is a checkbox next to "NMI on vBlank" in the "Control & Mask" section. If that box is checked, does that cause NMIs to occur regardless of the value of the Interrupt Disable flag? While debugging, with that flag set, my vblank function ran all of a sudden. The screen is drawn incorrectly, and I think this vblank code running when it isn't supposed to is the cause. (I remember checking that box after seeing it checked on your mesen.ca site, and I couldn't find a page describing the "Control & Mask" section.)
NM, YOU HAVE ALREADY FIXED THIS BUG REPORTED BELOW; THANK YOU! I WASTED MORE SPACE IN YOUR THREAD

Sour, there's a weird bug in the information overlay of Mesen's Nametable Viewer inside the PPU Viewer. If you set the PPU Viewer to "Normal View", move your mouse down over the top of Nametable 2, then up to Location (x, 29) of Nametable 0, and then back to Nametable 2's Location (x, 0)... the Location numbers are always correct in the right-side section that is visible during "Normal View"; but within the information overlay, those Location y values are sometimes quite incorrect (i.e. Nametable 2's Location (x, 0) may be shown unchanged as (x, 29)), or, when moving back and forth between the lowest two Nametable 0 tiles or the highest two Nametable 2 tiles (relative to the screen), those tiles' y values are swapped (like tile (x, 0) of Nametable 2 being shown as tile (x, 1)).
Hope that was written clearly.
Note: I'm using Mesen Version: DevWin-0.9.7.64; it was released on 1/20/2019 and after reviewing relevant posts in this thread it doesn't seem like this has been addressed. I'm in the middle of a debug and so I don't want to try this with the newest Mesen right now.
EDIT.
unregistered wrote:
If that box is checked does that cause NMIs to occur regardless of the value of the Interrupt Disable flag?
That box (along with every other box in that section) is one of the PPU's flags, set through one of the registers. When enabled, the PPU will trigger an NMI on scanline 241. NMIs are always processed by the CPU, regardless of the I flag.
RE: The controller number, I'll take a look when I have some time, if it looks better/more readable, I'll change it.
Sour wrote:
unregistered wrote:
If that box is checked does that cause NMIs to occur regardless of the value of the Interrupt Disable flag?
That box (along with every other box in that section) is one of the PPU's flags, set through one of the registers. When enabled, the PPU will trigger an NMI on scanline 241. NMIs are always processed by the CPU, regardless of the I flag.
Thank you! So the I flag disables NMIs on scanline 241... but everywhere else NMIs can occur? It's called the Interrupt Disable flag, at least in the MOS Technology manual's appendices, for some reason, I bet.
Would unchecking that box restore the I flag's usefulness? I will visit nesdev's wiki.
edit: ooh, it's set with one of the registers... END EDIT
edit2:
ahhh <-wiki | So I need to clear bit7 of $2000. Thank you Sour for your generous help!
... END EDIT2
Sour wrote:
RE: The controller number, I'll take a look when I have some time, if it looks better/more readable, I'll change it.
Cool!
That's just an idea... will be ok if you don't change it.
To clarify what Sour was saying (because your response is a bit confusing):
The 6502's i flag (controlled with instructions like sei/cli or plp) does not affect NMI. NMI stands for "non-maskable interrupt", which in this case means it's an externally-generated hardware interrupt. The i bit can only be used to inhibit what's coming on the IRQ line/wire. (You should be familiar with this from the fact that the 6502 provides an NMI vector as well as an IRQ/BRK vector. If you want to know more about the latter, just ask.)
Don't confuse IRQ and NMI. They're separate things (physically separate pins on the 6502 CPU, thus physically separate traces/wires). The confusion lies in the fact that both are types of interrupts. :-)
Like many video game systems, the NES ties/wires PPU VBlank to NMI, and provides an MMIO register to control that behaviour: bit 7 of $2000. It sounds like what Mesen is showing you in the info box is just whether or not bit 7 of $2000 is set.
Mesen generates an NMI when the emulator reaches/hits scanline 241 (which is when VBlank starts on the NES; see PPU rendering).
If you're programming on the NES or trying to figure it out: be aware that if you write to $2000, you are also affecting some very critical internal PPU addressing bits (particularly through bits 1 and 0 of $2000), which can affect scrolling, despite not being immediately obvious. That brings into focus this document, which will cause you to pull your hair out in confusion. :)
(Related-yet-not: the one thing I've seen several emulators offer, which I've never understood the purpose of, is what Mesen has under Debug -> PPU Viewer -> "When emulation is running, show PPU data at scanline X and cycle Y", where you can select X and Y yourself, defaulting to 241 and 0. I don't know what "show PPU data" means in this context. FCEUX has something under Debug -> PPU Viewer called "Display on scanline: X", where X defaults to 0. It also has the same thing under Name Table Viewer.)
koitsu wrote:
(Related-yet-not: the one thing I've seen several emulators offer, which I've never understood the purpose of, is what Mesen has under Debug -> PPU Viewer -> When emulation is running, show PPU data at scanline X and cycle Y where you can select X and Y yourself, defaulting to 241 and 0. I don't know what "show PPU data" means in this context.
This specifies when the PPU viewer is updated when the emulator is running - so by default it shows you the state of the nametables at cycle 0, scanline 241. For SMB3, for example, this shows you the status bar correctly, but the screen is garbage. If you set it to scanline ~180, the screen will be visible, but the status bar will be garbage. The cycle setting is less useful, but can be used in stuff like punch out which switches the CHR banks once or twice per scanline (iirc?)
koitsu wrote:
To clarify what Sour was saying (because your response is a bit confusing):
The 6502's i flag (controlled with instructions like sei/cli or plp) does not affect NMI. NMI stands for "non-maskable interrupt", which in this case means it's an externally-generated hardware interrupt. The i bit can only be used to inhibit what's coming on the IRQ line/wire. (You should be familiar with this from the fact that the 6502 provides an NMI vector as well as an IRQ/BRK vector. If you want to know more about the latter, just ask.)
Don't confuse IRQ and NMI. They're separate things (physically separate pins on the 6502 CPU, thus physically separate traces/wires). The confusion lies in the fact that both are types of interrupts.
Thank you. Yes, I was confused but reading the wiki page section that I linked to cleared the two types of interrupts confusion that I was experiencing.
koitsu wrote:
Like many video game systems, the NES ties/wires PPU VBlank to NMI, and provides an MMIO register to control that behaviour: bit 7 of $2000. It sounds like what Mesen is showing you in the info box is just whether or not bit 7 of $2000 is set.
I meant to say that what Mesen is showing me in the info box just really made sense to me after I reread Sour's reply, by my first edit:
unregistered wrote:
edit: ooh, it's set with one of the registers... END EDIT
I need to work on my clarity.
koitsu wrote:
Mesen generates an NMI when the emulator reaches/hits scanline 241 (which is when VBlank starts on the NES; see PPU rendering).
If you're programming on the NES or trying to figure it out: be aware that if you write to $2000, you are also affecting some very critical internal PPU addressing bits (particularly through bits 1 and 0 of $2000), which can affect scrolling, despite not being immediately obvious. That brings into focus this document, which will cause you to pull your hair out in confusion.
Yes, the wiki was written differently, or at least that page looks different now, and I spent a very long time getting scrolling to work in our game. But, that process was possible for me partly because of tepples' crucial visible_left/valid_left post at the very bottom of page 69 of my lengthy thread. And partly because of tokumaru's you-should-have-a-camera response-motif. Kasumi helped so much too!
Our game only scrolls horizontally, so it wasn't insane like most of the page you linked to describes.
I hadn't worked on disabling NMIs in a long time and so I was really confused. But Sour's and your responses, Koitsu, have eased me back into understanding what needs to be done. Thank you both so much!
Sour wrote:
This specifies when the PPU viewer is updated when the emulator is running - so by default it shows you the state of the nametables at cycle 0, scanline 241. For SMB3, for example, this shows you the status bar correctly, but the screen is garbage. If you set it to scanline ~180, the screen will be visible, but the status bar will be garbage. The cycle setting is less useful, but can be used in stuff like punch out which switches the CHR banks once or twice per scanline (iirc?)
Ah ha! Makes sense. I knew on a general level "what" the options were referring to, but "show PPU data at scanline" made me think "...what PPU data? What is it showing at a scanline or cycle point? A snake?" LOL :D Thank you!
Instead of "show PPU data at" you could say "sample PPU state at" or something, but I'm not sure that's any clearer.
Sour, after some quick forum searching I think this hasn't been discussed... on my Mesen DevWin-0.9.7.97 debugging screen there are sometimes many "sub start" lines in the middle of some of my functions. (I think the purpose of "sub start" is to help me know where unlabeled functions start.) It seems to me this problem has occurred because of my moving functions around, adding or subtracting bytes, inserting new code, and rewriting parts of old code... however, I keep failing to find a way to reset the "sub start" labels. Can you please teach me how to clean the pointless "sub start"s away?
The information about function start points is stored in the CDL file that it saves in the Debugger subfolder.
You can reset it in Tools->Code/Data Logger or by deleting the CDL file (note that this will make the disassembly lose track of which parts of the ROM are code vs data until those parts of the ROM are run again, but that's probably a good thing if you've changed the code a lot).
I haven't used the recent builds since I use the current version daily on Linux, but is the latest dev build usable in that environment? When should we expect 0.9.8?
The latest dev builds should be pretty stable - I've been fixing bugs as they are reported for the past month or 2 (and haven't been doing much else). It should be pretty safe to use them - though save state compatibility was broken at some point, so 0.9.7 states can't be loaded anymore (and I may have to do this one more time before releasing 0.9.8).
I meant to release 0.9.8 over a month ago, but haven't had the time to tie up the last few loose ends and make a proper release.
Sour wrote:
The information about function start points is stored in the CDL file that it saves in the Debugger subfolder.
You can reset it in Tools->Code/Data Logger or by deleting the CDL file (note that this will make the disassembly lose track of which parts of the ROM are code vs data until those parts of the ROM are run again, but that's probably a good thing if you've changed the code a lot).
Sour, thank you so much!
Note for others: the Code/Data Logger is only in the Debugger's Tools menu. Resetting its log removed all of the old "sub start"s! Yeay!
I have a question regarding the latest build. Is it possible that, compared to the official one, full speed (F9) is slower? It feels that way on Linux, not sure why. It seems to skip fewer frames at full speed. Odd.
"Official" releases are optimized by PGO and are typically 15-30% faster than the appveyor dev builds, so yes, that's probably normal in this case.
Hi Sour,
Thank you for your hard work on Mesen! I'm running the Chinese Win10 64-bit OS, and Mesen defaults to Chinese. Can I change it to English? I can't find the option in Mesen's settings. Thanks.
@Sour
I see. Then I will only use the debug version when I have bug I cannot work around then. Thanks for confirming!
@Tomy
I will answer for Sour since I had to do this a few times. The answer is yes: go to the 3rd menu ("Options" in English), then from this menu select the last entry, which is "Preferences". In the first tab (General), the first option, a dropdown, should be called "Display Language" and defaults to "User account default", which means it uses the OS language.
If you change it to English from that dropdown, it will always be in English from that point on. It should be the first choice in the list, unless the order changes when displayed in Chinese. Hope it helps!
@Banshaku
Oh, yes, foolish me. I found that option now. Thank you.
When running the fill mode test of this MMC5 test ROM, Mesen shows white and red numbers.
Fill mode should show a single color for all tiles, right? It looks like Mesen writes the fill mode color as an 8-bit value ($03) to the attribute table instead of repeating the 2-bit value four times to fill the byte ($FF).
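For anyone curious what "filling the byte with the value 4 times" means in practice, here's a quick sketch of the bit arithmetic (not Mesen's actual code, just an illustration of replicating a 2-bit palette value across all four fields of an attribute byte):

```python
def fill_attribute(palette_index):
    """Replicate a 2-bit palette value into all four 2-bit
    fields of an attribute byte (what MMC5 fill mode should do)."""
    p = palette_index & 0b11
    return p | (p << 2) | (p << 4) | (p << 6)

# Writing $03 raw only sets the top-left quadrant's palette bits;
# the replicated value covers the whole byte:
print(hex(fill_attribute(0b11)))  # 0xff
```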
And you're absolutely correct! Wonder how I managed to miss that considering I've run that test a bunch of times before.
It's fixed as of the latest commit - thanks for the report!
Seems like the Famicom keyboard keys "[" and "]" are swapped in the input dialogue. They are labelled correctly but mapped wrong (just try typing "[ ]" in Family BASIC V3, for example). Other keys appear to be alright.
Also, the keyboard keys on the host side are lacking things like separate left and right Ctrl and Shift, and the AltGr key works like a Ctrl+Alt combination, for example (Return and Enter are separate, though). This is especially annoying when using the Famicom keyboard. I guess this is a problem with the input library, though.
Thanks, the inverted brackets should be fixed.
As far as I can tell left/right keys being treated as the same key was done on purpose when I implemented customizable shortcuts.
e.g. if you bind "open file" to "Ctrl+O", you most likely expect it to work regardless of which Ctrl key you press (this is more or less the case in pretty much all Windows applications).
There are a number of scenarios (e.g. this one) where it might make sense to distinguish between the two, but I can't really see a simple way of implementing both of these requirements in a way that's obvious and user-friendly (at best it could probably be a global option, but I'm not sure this is useful enough to warrant one).
Yeah well I think most people would like the emulated inputs to treat each key as unique while interface inputs like the shortcuts should not, so my first thought would be to handle them separately.
In those cases where you want multiple keys to act the same for emulated inputs, like if you want both Ctrl keys to work as the Famicom keyboard's CTR key, or to be able to use the numpad numbers the same as the number row, there are already 4 key-sets that can be used for alternate keys.
Alternatively one could add alternate key-sets for the shortcuts as well. If the input dialogue says "Left Ctrl" people will not be surprised that the right Ctrl does not work as well.
Pokun wrote:
Yeah well I think most people would like the emulated inputs to treat each key as unique while interface inputs like the shortcuts should not, so my first thought would be to handle them separately.
I guess that's one way to separate them, but I'd have to check to see how easily it can be done, since all the input logic is shared between both types of keybindings (and there's also some Windows and Linux specific code involved here). I'll add to the list of things to check/fix.
hi Sour,
In your Mesen Nametable Viewer, what does a persistent red latticework over an entire screen (actually, over two "screens" represented by nametables 0 and 2, because of vertical mirroring) mean? The screens are being drawn correctly underneath that red latticework, but it never goes away until we stop drawing screens; obviously, this results in nothing being displayed on the emulated game screen until the red latticework goes away.
Could the reason be that screens have to stay an entire frame or so before being shown?
Thanks for reading
You probably have the "Highlight tile updates" option enabled at the bottom of the nametable viewer?
Thank you Sour;
you are correct. The other screens show when that red highlight goes away... so that tells me that a screen has to stay for an entire frame for it to display. This is just my guess.
In a recent feature request and a recent commit, breaking save state compatibility is mentioned. So is it possible to split save states into 2 parts? One part contains the emulated game state and the other contains the emulator state. The emulated game state format should be standardized for a given mapper, so future changes to the emulator shouldn't break the compatibility of this part. The emulator state can change when required, but since it doesn't affect the gameplay, it shouldn't be a huge problem to skip this part and just initialize those values. This way old save states remain compatible.
I'm not too sure what distinction you're making between the "emulation" and "emulator" state? From my point of view, 99% of the content of save states is vital "emulation" state.
I usually try to keep save states compatible from one version to another, but between 0.9.7 and now I've reworked some of the internals to remove some limitations and simplify some code which forced me to break save state compatibility. The recent commit that broke it is just me breaking compatibility for simplicity since I've already done so since 0.9.7 (e.g from the point of view of someone using the official release builds, compatibility will only break once).
While it's certainly possible to create a more robust system for save states, the one I'm using now is simple to use/implement in terms of code, which I kind of prefer over having to come up with a more complex solution that would probably involve more boilerplate everywhere in the code.
Sour, here are 2 notes.
1.) After successfully hex editing my CDL file, here is what is now evident to me for each 8-bit hex value (when it's set)...
- bit 7: function starts (clearing this bit is a way to remove old sub start)
- bit 4: helps Mesen to know the start of an instruction
- bit 1: this byte is a member of a data section
- bit 0: written when code has been visited
And so now there are helpful huge data sections, but when mousing over a byte it pulls up a popup that talks about its zero page value. Would pulling up nothing when hovering over data section bytes be better? It even pulls up the zero page value of a data section byte when the real byte has a label.
2.) After clicking File>(Workspace)>"Export Labels", saving an mlb file, and then opening that file in a Unix text editor, here is another idea: would it be terrible to add a '|' after an address? That would allow me to add another address before the ':', so that naming more than one spot in memory with the same name would be possible.
tokumaru taught me to use, in asm6,
Code:
test=$
at the same address in multiple banks to specify the same label "test" in multiple banks. That is super helpful for me!
Obviously, this doesn't port over to Mesen. But maybe you could implement that '|'? Maybe when loading the mlb file you could store the other memory address when reaching the '|', and then import those addresses into Mesen when reaching that address. (It seems like the mlb file is set up sequentially because it's easier to load values in order.)
---
Thank you Sour for reading these two small notes.
The CDL flags are defined here:
https://github.com/SourMesen/Mesen/blob ... ogger.h#L9
Bits $04, $08 and $20 are currently unused (in Mesen; FCEUX uses some of the bits differently).
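unregistered's bit list above matches those definitions. As a small illustration (the flag names here are my own paraphrase of the post, so check CodeDataLogger.h for the authoritative ones), a CDL byte can be decoded like this:

```python
# CDL flag bits as described earlier in the thread (paraphrased;
# see Mesen's CodeDataLogger.h for the real definitions)
CDL_FLAGS = {
    0x01: "code",               # byte was executed as code
    0x02: "data",               # byte was read as data
    0x10: "instruction start",  # first byte of an instruction
    0x80: "sub start",          # first byte of a function
}

def describe_cdl_byte(value):
    """Return the names of all CDL flags set on one byte."""
    return [name for bit, name in CDL_FLAGS.items() if value & bit]

# Clearing bit 7 (as mentioned above) removes a stale "sub start":
print(describe_cdl_byte(0x91 & ~0x80))  # ['code', 'instruction start']
```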
unregistered wrote:
when mouse over a byte it pulls up a screen that talks about its zeropage value.
That's a side effect of any "$...." syntax in the disassembly automatically displaying a popup for the matching address. I suppose I could disable it for data lines, though (I never really display the data blocks in the disassembly view much, so I hadn't realized this was happening).
Labels in Mesen need to be globally unique, since there is no concept of label scope. So no, it's not possible to redefine the same label at multiple different addresses. In the CC65 integration, it automatically appends a digit when it finds duplicate labels in the source. As far as the asm6f label export goes, though, duplicate labels just overwrite one another, I think (the export code in asm6f would need to be improved)
Sour wrote:
I'm not too sure what distinction you're making between the "emulation" and "emulator" state? From my point of view, 99% of the content of save states is vital "emulation" state.
For example, the real NES hardware does not keep track of how many CPU cycles have passed since the game started running, so the CPU cycle counter shouldn't be vital to playing the game and should work fine by just initialising it to 0 instead of reading it from a save state.
mkwong98 wrote:
Sour wrote:
I'm not too sure what distinction you're making between the "emulation" and "emulator" state? From my point of view, 99% of the content of save states is vital "emulation" state.
For example, the real NES hardware does not keep track of how many CPU cycles have passed since running the game so the CPU cycle counter shouldn't be vital to playing the game and should work fine by just initialising it to 0 instead of reading from a save state.
If mappers can access such a counter, however, they might do so rather than maintain their own, and malfunction if the counter doesn't behave as expected.
The NES doesn't keep track of the total number of cycles, no, but emulating it properly requires knowing if the current cycle is "odd" or "even" for DMA/etc, so keeping track of the cycle count solves that requirement.
Also, like supercat said, some mappers (and various other things) rely on the cycle counter for timing, so removing it from the state would require changing the logic for those (e.g they would need to have their own countdown timers that get decremented every CPU cycle, etc.).
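To illustrate Sour's point that DMA alignment only needs the counter's parity (while mappers may want the absolute count), here's a toy sketch; this is illustrative only, not Mesen's implementation, though the 513/514-cycle OAM DMA figure is the usual NES behaviour:

```python
class CpuClock:
    """Toy CPU cycle counter: mappers may use the absolute count
    for timing, while OAM DMA only needs to know its parity."""

    def __init__(self):
        self.cycles = 0

    def tick(self, n=1):
        self.cycles += n

    @property
    def odd(self):
        return self.cycles & 1 == 1

clk = CpuClock()
clk.tick(12345)
# OAM DMA takes 513 cycles, plus 1 alignment cycle when it
# starts on an odd CPU cycle:
dma_cycles = 513 + (1 if clk.odd else 0)
print(dma_cycles)  # 514, since 12345 is odd
```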
Sour wrote:
Wow!
edit: link doesn't seem to work for me, at least not on this computer. END EDIT.
final edit: your link definitely works on a much better CPU.
END EDIT.
Sour wrote:
Labels in Mesen need to be globally unique, since there is no concept of label scope. So no, it's not possible to redefine the same label at multiple different addresses. In the CC65 integration, it automatically appends a digit when it finds duplicate labels in the source. As far as the asm6f label export goes, though, duplicate labels just overwrite one another, I think (the export code in asm6f would need to be improved)
Yes, in asm6, duplicate labels just overwrite one another... that's why duplicate labels must be at the same address for everything to work correctly. But that is extremely helpful when used creatively. In our game, certain bytes (i.e. abyte = $) in each lower bank contain flags. Then a simple bit abyte, paired with an appropriate branch, can instantaneously tell what type of bank is selected without affecting the registers!
It's so cool! Therefore, I hope "the export code in asm6f would need to be improved" does not eliminate the usefulness of duplicate labels.
Sour wrote:
Labels in Mesen need to be globally unique, since there is no concept of label scope. So no, it's not possible to redefine the same label at multiple different addresses.
After some thinking: Mesen uses two types of addresses,
Byte Code and
PRG Address. The nes game ROM treats duplicate labels as Byte Code. Mesen must somehow translate that Byte Code into the appropriate PRG Address. So, could you allow a flag character in the mlb files, say '*', that when found it interprets the following address as Byte Code? i.e. ...nm, it seems you already have something like that...
Does the R stand for Real and the P stand for PRG Address? Would allowing R:9000:abyte be possible? Or maybe your code that handles R is much simpler than your P code; if so, and you need to use P code, would P:*9000:abyte be possible? (The '*' would work as suggested above.) Just some thoughts.
note: abyte is a fake variable and so is its 9000 address
... or maybe don't use a '*'; replace that with an 'F'... just another idea. I don't know if the PRG Address can get that high. Just trying to help... I don't know if my ideas are even possible, I should be quiet now.
I'm not entirely sure I'm following what you're saying.
Like I said though, Mesen has no concept of label scope - it can't redefine the same label multiple times because of this.
If multiple identical labels existed, the debugger would have no idea which one was being referred to by a given line of code, which makes it harder/impossible to display a corresponding tooltip, or to navigate to the label, or to open the label in the hex editor, etc.
When I mentioned improving asm6f, I strictly meant the .mlb file export feature it contains, not the actual assembler.
"R" in the mlb label files refers to the 2kb of internal RAM that the NES has (e.g $0000 to $1FFF in CPU memory)
Sour wrote:
The NES doesn't keep track of the total number of cycles, no, but emulating it properly requires knowing if the current cycle is "odd" or "even" for DMA/etc, so keeping track of the cycle count solves that requirement.
Also, like supercat said, some mappers (and various other things) rely on the cycle counter for timing, so removing it from the state would require changing the logic for those (e.g they would need to have their own countdown timers that get decremented every CPU cycle, etc.).
In this case, a flag indicating "odd" or "even" and values for the mappers' timers belong to the emulated game state, and the CPU cycle counter is an emulator-specific implementation of that state. What I'm suggesting is a save state format which is emulator-independent and hence always compatible. I see no reason why it's impossible, but I guess it's too complicated to actually do.
Discuss a cross-emulator save state format here (if you dare).
Sour wrote:
I'm not entirely sure I'm following what you're saying.
Like I said though, Mesen has no concept of label scope - it can't redefine the same label multiple times because of this.
If multiple identical labels existed, the debugger would have no idea which one was being referred to by a given line of code, which makes it harder/impossible to display a corresponding tooltip, or to navigate to the label, or to open the label in the hex editor, etc.
I guess I don't understand how the PRG Addresses work. Does the assembler asm6 translate each of my identical labels in each our lower banks into PRG Addresses listed in the ROM (the .nes file)? Or does Mesen create the PRG Addresses so that the mlb file can work?
Regardless, I will try again. Somehow you must have access to the ROM addresses to emulate a NES. I thought it might be possible to specify a ROM address in your mlb file (i.e. $9000). That would be just one address. But you'd have to get your code to treat those addresses with the same method you use to translate my identical labels into PRG Addresses. For example, in each of the lower banks ($8000-$BFFF) in our MMC1 game (note: the 2 higher banks ($C000-$FFFF) can be either bank 15 or bank 31) there are certain bytes with identical labels; let's pretend one of those labels is named abyte and always labels address $9000. So my idea was to somehow specify in your mlb file that abyte points to address $9000.
Like:
- P:*9000:abyte or
- I:9000:abyte| note: I for identical
Then when reaching a '*' or the 'I' you would have to use the specified address in the same way you interpret each of the abytes that, we are pretending, are sitting at the same ROM address in each of the lower banks in our game. Somehow, when switching banks, the PRG Address for abyte changes to the appropriate value... so you already have that code written. I was just wondering if it's possible to reuse that code so that, when trying to label $9000, you could translate that $9000 into the appropriate PRG Address depending on what bank is selected in the emulated game?
Sour wrote:
When I mentioned improving asm6f, I strictly meant the .mlb file export feature it contains, not the actual assembler.
"R" in the mlb label files refers to the 2kb of internal RAM that the NES has (e.g $0000 to $1FFF in CPU memory)
Oh ok!
Great choice of "R".
The "PRG ROM" labels are not based on the address of bytes in the CPU's memory (e.g $0000-$FFFF), but rather based on their offset in the .nes file (excluding the header). i.e it's an offset in the entire PRG ROM.
So while your first "abyte" might be at offset $1000, the 2nd one could be at $23000, for example (even though at runtime they are both mapped to $9000 in the CPU's address space, in the actual .nes rom, they refer to different bytes). Mesen treats these as completely independent labels. If it was possible to give 2 different bytes in the PRG ROM the same label, some functionality would no longer work properly (e.g right-click, edit in memory viewer, etc.)
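The distinction Sour describes can be sketched numerically. Assuming a simple MMC1-like layout with 16 KB banks at $8000 and $C000 (the bank numbers below are made up for illustration), the same CPU address maps to different PRG ROM offsets depending on the selected bank:

```python
def prg_offset(cpu_addr, low_bank, high_bank):
    """Translate a CPU address ($8000-$FFFF) into a PRG ROM file
    offset, assuming 16 KB banks at $8000 and $C000 (MMC1-like)."""
    if 0x8000 <= cpu_addr < 0xC000:
        return low_bank * 0x4000 + (cpu_addr - 0x8000)
    if 0xC000 <= cpu_addr <= 0xFFFF:
        return high_bank * 0x4000 + (cpu_addr - 0xC000)
    raise ValueError("not a PRG ROM address")

# Two different "abyte" bytes, both visible at CPU $9000
# depending on which bank is switched in:
print(hex(prg_offset(0x9000, low_bank=0, high_bank=15)))  # 0x1000
print(hex(prg_offset(0x9000, low_bank=8, high_bank=15)))  # 0x21000
```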
Ok, remember that I'm around understanding level 0.7 and you must be at understanding level 10. Could you make
- O:9000:abyte | note: O refers to ROM
so that abyte is a label to whatever byte, in the .nes ROM, is mapped to $9000 in the CPU's address space?
Maybe that is a bad question, but am just wondering. I'm done asking; thank you Sour for responding each time.
p.s. Then, when right-click or edit in memory viewer you could have Mesen zero-in on whatever byte, in the .nes rom, is mapped to the specified part (byte) in the CPU's address space.
Sour wrote:
The "PRG ROM" labels are not based on the address of bytes in the CPU's memory (e.g $0000-$FFFF)
Sigh, sorry Sour, this opening statement of yours must have escaped me.
Regardless, my last question would have made so much more sense if it had used:
- C:9000:abyte | note: C represents CPU address space
But, you just said the "PRG ROM" labels are not based on the CPU's address space... so you would have to make a new "CPU address space" label. That seems like a lot of work so it's probably not worth it.
It's not a crucial detail... just thought it would be cool.
tldr: nm
You can try using the "register" ("G" in mlb files for 'global') type labels for your scenario, maybe it'll work as you expect it to.
Essentially, they are CPU memory labels ($0000-$FFFF) that ignore all banking, etc. This is the type of label used to define the PPU/APU registers. The rules for them to be used in the disassembly window are slightly different, but might work for your case.
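For reference, the .mlb lines seen in this thread look like TYPE:ADDRESS:LABEL (R for internal RAM, P for PRG ROM offsets, G for the banking-ignoring "register" labels Sour mentions). A minimal parser sketch, with the caveat that real files may carry an extra comment field after the label:

```python
def parse_mlb_line(line):
    """Split a Mesen .mlb label line into (type, address, label).
    Any trailing comment field stays attached to the label here."""
    kind, addr, label = line.strip().split(":", 2)
    return kind, int(addr, 16), label

print(parse_mlb_line("G:9000:abyte"))  # ('G', 36864, 'abyte')
```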
Thank you so much Sour, that works!
Even after running the latest MesenDevWin, this is still a problem that I can't understand.
Here is a screenshot of my asm6 .lst file and Mesen's debugger...
Somehow the last two bytes from my jsr attributetable have both been pushed forward two bytes and into a new fake instruction. And thus, the following rts has descended two bytes.
(I'm in the midst of debugging my first try at self-modifying code. It has been going pretty well!)
asm6_'s .lst file is different. Is this a problem with my asm6_? I don't think so, because the assembler was never changed; we just attempted to change .lst file creation. I'm lost.
^
nm, my init_PRGRAM function in ROM wasn't correct... now seei in my PRGRAM is written correctly. Sorry for wasting your space and time.
9 months and nearly 300 commits later, 0.9.8 is finally out.
The online documentation has also been updated.
If you find anything that's broken, please let me know!
Just a heads up, 0.9.8 had an issue causing FDS games to load incorrectly. It's fixed and I've recompiled a new build of 0.9.8, but if you upgraded to 0.9.8 already, you'll need to manually download 0.9.8 from the website or github to get the fixed build. Sorry!
A new build, yay!
I need to go check for the linux build now.
Hello. I'm trying to get this working on Linux, but I'm new to Linux and can't figure out how. There does not seem to be a "download for Linux" button?
Instructions are on Mesen's source:
The official releases (same downloads as the Windows builds above) also contain the Linux version of Mesen, built under Ubuntu 16 - you should be able to use that in most cases if you are using Ubuntu.
The Linux version is a standard .NET executable file and requires Mono to run - you may need to configure your environment to allow it to automatically run .exe files through Mono, or manually run Mesen by using mono (e.g: "mono Mesen.exe").
The following packages need to be installed to run Mesen:
- mono-complete
- libsdl2-2.0
- gnome-themes-standard
If you're using something other than Debian or Ubuntu, some details may change.
It'd be nice if the overscan cropping was a function of NTSC (e.g. 224 scanlines) vs PAL (whole field).
Also, it'd be nice if there was a more verbose error than "failed to load XX" in case of a too-small file.
I feel like the NTSC/PAL overscan has been asked before, too, but was not on my list of ideas, somehow. It shouldn't be too hard to add, especially considering there's already per-game overscan settings.
There is a more verbose message in the log window:
Code:
[iNes] Invalid file (file length does not match header information) - load operation cancelled.
Although I agree it can be easy to overlook.
Hi Sour, I'm using Nametable viewer and there are problems with 2 games: "Captain Tsubasa Vol. II - Super Striker" and "Dragon Ball Z II - Gekishin Freeza!!". Can that be fixed? Thanks.
Why do you think that's a problem in the Nametable viewer?
You see the same thing in FCEUX's nametable viewer.
Definitely not a bug; game is using some per-scanline/at-runtime PPU trickery that cannot be reflected in the nametable directly.
Like koitsu & lidnariq said, this isn't a bug.
The game probably switches CHR banks in the middle of the screen, but the nametable viewer can only display the nametable using whatever CHR banks are currently in use. To see the top portion of the screen correctly, you'll need to change the scanline value at the bottom of the PPU viewer window - this defaults to 241, but try setting it to 40-50 and you should be able to see the top portion (which will cause the bottom part to look incorrect).
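To illustrate the kind of mid-frame trickery involved, here's a hedged MMC3-style sketch (the register addresses are MMC3's, but the bank number and handler structure are purely illustrative, not this game's actual code):

```asm
; Hypothetical MMC3-style IRQ handler that swaps a CHR bank
; partway down the screen, so the top and bottom of the frame
; use different tile graphics. Bank number $08 is illustrative.
irq:
    pha
    lda #$00        ; select CHR bank register 0 (2 KB bank at PPU $0000)
    sta $8000
    lda #$08        ; bank to display for the bottom of the screen
    sta $8001
    sta $E000       ; acknowledge & disable the MMC3 IRQ
    pla
    rti
```

A nametable viewer that samples the PPU at scanline 241 would render everything with the bank set here, even though the top of the screen was drawn with a different one - hence the need to change the scanline setting.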
koitsu wrote:
Definitely not a bug; game is using some per-scanline/at-runtime PPU trickery that cannot be reflected in the nametable directly.
PPU IRQs, much like the scorebar in SMB3.
Pedantically, the Bandai game in the screen shot is using an M2-timed IRQ.
Sour, something I don't understand about the Tile Updates in Mesen's Nametable Viewer: why do 8-pixel rows 24 and 25 never become red? The full screen always changes appropriately, but even when running at 1% emulation speed, rows 24 and 25 are never highlighted red.
Our game now draws the background horizontally, one 32-pixel tall row during each vblank, while each screen is being drawn - so an 8th of the screen per vblank. Maybe this info will somehow help?
Oh - instead of highlighting rows 24 and 25, rows 28 and 29 are highlighted, which should only happen the next frame when, afterward but still in vblank, the screen is colored.
I'm sorry if this becomes another waste of your time and space.
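For context, a per-vblank row update like the one described usually looks something like this sketch (the destination address $2300 and the row_buffer name are hypothetical, not the game's actual code):

```asm
; Sketch: copy one 32-byte nametable row into the PPU during vblank.
    lda $2002           ; reset the PPU address latch
    lda #$23
    sta $2006
    lda #$00
    sta $2006           ; PPU address = $2300 (one nametable row)
    ldx #0
@copy:
    lda row_buffer,x    ; next tile index from a RAM buffer
    sta $2007           ; write to the nametable via $2007
    inx
    cpx #32
    bne @copy           ; 32 tiles = one full-width row
```

Each such burst of $2007 writes is what the Tile Updates highlighting should catch.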
It's a bit hard to know with the information you've given. As far as I can tell, any of the rows can get highlighted on my end.
Do the rows physically change in the PPU viewer but don't show any highlight?
Make sure you haven't enabled the "ignore writes that do not alter data" option and try setting the window to refresh at 60fps instead of the default 30, just in case (but this shouldn't matter if you're already running the emulation at a slow speed)
Another problem when using the keyboard. The Alt key is conveniently used as the Famicom's GRPH key by default, but when pressing Alt an emulator menu option is selected. This makes using the GRPH key inconvenient, especially as it's used as a modifier key and pressing Alt + another key may trigger a menu option in the emulator interface. Is there a way to disable this behaviour of the Alt key?
If you want to test this quickly, run Family BASIC V3, press the KANA key to enter kana mode and then press the "A" key which should normally type a "サ" and a "ザ" if the GRPH key is held.
Finally, I have a feature request. Nestopia and some other emulators like OpenMSX have a paste macro for automatically typing the characters in the clipboard into the emulated machine. In Nestopia you press F12 (or click the menu option) to "paste" the text into the emulated Famicom. In OpenMSX there is a text box in the emulator interface that characters can be typed or pasted into and then automatically typed on the emulated MSX with a button click. Both approaches work quite well, although I like the OpenMSX approach more since you can see what text you are pasting and easily make any edits before you start the typing macro. One small annoyance in Nestopia is that it doesn't seem to recognize katakana characters in the clipboard (or at least I don't know how to get it to work using UTF-8), meaning programs using katakana strings will have empty strings that must be filled in manually in Family BASIC (I guess supporting this would require keeping track of the kana mode and using the GRPH key for dakuten and handakuten characters). Also, although Family BASIC doesn't have lower-case letters, keyboard Famiclone programs and pirate versions of Family BASIC do, using the SHIFT key, so since Mesen already supports Subor Famiclones, case sensitivity might also be a consideration.
This is the final feature that forces me to keep using Nestopia for Family BASIC stuff, and it's a great feature I'd like to see in Mesen. It's a great way to quickly test a Family BASIC program typed on a computer without having to type it out again. Although I have a Family BASIC and keyboard, I more often develop programs on my computer and paste them into Nestopia. My computer has better tools and I can save programs on it without having to use tapes, which speeds up work greatly (once a cool program is done I might type it out on my real hardware and save it on tape, though).
Feature request: display of "cycles since last resume"
This is one thing that I keep going back to FCEUX for. It's very useful for verifying cycle timed loops when there's branches involved. E.g. for sample playback I could breakpoint on $4011 and then resume 100 times in a row watching that counter to make sure it's the expected number of cycles each time.
Mesen seems to have an editable cycles field, which is OK if you want to time something just once, but not very useful for repetition like that, which is usually what I need when I'm writing cycle-timed code. A simple way to do this might be an option/checkbox to automatically reset that field to 0 on every resume?
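As a concrete example, here's a hedged sketch of the kind of cycle-timed loop I mean ("sample_ptr" is a hypothetical zero-page pointer): breakpoint on the STA $4011 and verify the counter shows the same delta on every resume.

```asm
; Hedged sketch: a PCM playback loop writing the DMC level
; register ($4011). The per-iteration cycle count should be
; identical every time the loop body runs.
play_loop:
    lda (sample_ptr),y  ; 5 cycles (+1 on page cross): fetch sample
    sta $4011           ; 4 cycles: set DMC output level
    iny                 ; 2 cycles
    bne play_loop       ; 3 cycles when the branch is taken
```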
Pokun wrote:
Is there a way to disable this behaviour of the Alt key?
Probably not at the moment - I should be able to disable the behavior whenever "keyboard mode" is enabled though. I'll add it to the list and try to take a look soon.
RE: Pasting, it's been on my list of things to do forever, just never got around to it. In one release I implemented ~15 peripherals and it took a month to get it all done - at that point I was just tired and needed it to end, so paste ability ended up being cut :p I'll try to take a look and see how simple it would be to implement - can't imagine it would be too hard.
rainwarrior wrote:
This is one thing that I keep going back to FCEUX for. It's very useful for verifying cycle timed loops when there's branches involved. E.g. for sample playback I could breakpoint on $4011 and then resume 100 times in a row watching that counter to make sure it's the expected number of cycles each time.
The debugger window does display the number of cycles since the previous break at the bottom right in the status bar - that sounds like what you're asking for, but I might be misunderstanding?
Sour wrote:
The debugger window does display the number of cycles since the previous break at the bottom right in the status bar - that sounds like what you're asking for, but I might be misunderstanding?
Ah, that does it! Awesome! No, I just never saw it down there.
Sour wrote:
Pokun wrote:
Is there a way to disable this behaviour of the Alt key?
Probably not at the moment - I should be able to disable the behavior whenever "keyboard mode" is enabled though. I'll add it to the list and try to take a look soon.
Thanks, I was worried it couldn't be fixed so easily.
Sour wrote:
RE: Pasting, it's been on my list of things to do forever, just never got around to it. In one release I implemented ~15 peripherals and it took a month to get it all done - at that point I was just tired and needed it to end, so paste ability ended up being cut :p I'll try to take a look and see how simple it would be to implement - can't imagine it would be too hard.
Good to hear that you were planning this already. I'll eagerly be waiting for this feature.
BTW, while I mentioned case sensitivity being a good idea, in the case of Family BASIC (except for the Chinese/Russian hacked versions) it may be better to be able to turn case sensitivity off, since it can't type letters at all if SHIFT is pressed.
Sour, thank you for your response. I'm not sure if that was fixed after following your advice. I just checked, and currently the "Ignore writes that do not alter data" box is unchecked and the nametable viewer is now refreshing at 60 FPS. Thank you for your help too.
Another problem I've found: creating "labels" without an actual label - just filling out the Comment box after pressing F2 - destroys Mesen's performance after a while. This has happened twice. Eventually the .mlb file size increases dramatically (from 9 KB to something like 32 MB), and this causes the debugger to load extremely slowly and resets to take extremely long.
The 9 KB and 32 MB .mlb files are otherwise identical, except that each comment-only line, such as:
P:3D788::RLE loop begins\n(sets and uses carry)
or even:
P:3D78F::\n
(notice the two colons next to each other due to the missing label) is repeated and repeated and repeated many, many times.
To restore Mesen's speed, both times I've had to delete all but one of each of those "::" lines and then load the updated .mlb file. Currently I've created 4 comment-only "labels".
Were comment-only "labels" never planned? For me they help the debugging process...
P.S. I do not use fasm to create .mlb files - rather, I just keep editing the .mlb and .cdl files (with and without Mesen's help) manually. Maybe that causes this problem?
I thought I had fixed the issue with comments endlessly repeating themselves, but it looks like it was only fixed in Mesen-S and not Mesen. I'll try to commit the fix for it over the weekend.
Sour wrote:
I thought I had fixed the issue with comments endlessly repeating themselves, but it looks like it was only fixed in Mesen-S and not Mesen. I'll try to commit the fix for it over the weekend.
Thank you so much Sour!
(Pointless note: I'm currently using Mesen 0.9.8)
Hi Sour, I'm looking up HD pack tags in the online documentation using Chrome and some of them are too long to display on one line, but they don't wrap and there is no horizontal scroll bar. This one, for example:
https://www.mesen.ca/docs/hdpacks.html#lt-background-gt-tag
Can you fix that when you have time? For the time being, I copy and paste the page content into Notepad to read it. Thanks.
I have a problem with the <background> tag, and it would be nice if you could change its behavior. Currently, if the background image contains a transparent area, the emulator shows solid white in that area. Is it possible to change this so that if an area in the background image is transparent, the emulator renders that area as if no <background> tag were defined? I.e. it shows the bg colour and the bg priority sprites even though the "Show Behind Background Priority Sprites" parameter is disabled. Thanks.
I have a scene where there is a large section of solid colour and a small section where the bg tiles use transparency for bg priority sprites to show through:
The original screen. The stars are bg priority sprites:
Attachment:
Dragon Ball Z II - Gekishin Freeza!! (J) [!]_001.png [ 3.04 KiB | Viewed 9876 times ]
The background image with a transparent section:
Attachment:
cutscenebg.png [ 23.11 KiB | Viewed 9876 times ]
The stars no longer show and that area is filled with solid white:
Attachment:
Dragon Ball Z II - Gekishin Freeza!! (J) [!]_002.png [ 89.07 KiB | Viewed 9876 times ]
I'm looking into the game T&C Surf Designs - Wood and Water Rage, and I noticed there are these odd sprites just before the first level loads.
I looked into it a bit, and I noticed that OAMADDR is not reset when one of the OAM DMAs runs a couple of frames before (when the screen turns grey).
There doesn't seem to be anything special happening, but I haven't gotten a chance to dig too deeply into it.
I think OAMADDR should be reset at this point, though.
Maybe a hardware test would help too. (These stray sprites also occur earlier on, during loading.)
Also, this isn't just a visual glitch; it affects sprite zero timing, which is keeping me from syncing some TASes between BizHawk and Mesen, so I'd really like to know which emulator is correct.
Thanks!
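For reference, the usual safeguard on real hardware is to reset OAMADDR right before the DMA each vblank - a hedged sketch (the $0200 source page is just the conventional choice):

```asm
; Common NMI-time pattern: reset OAMADDR ($2003) before starting
; OAM DMA ($4014), since rendering can leave OAMADDR nonzero and
; the DMA fills OAM starting from whatever OAMADDR currently holds.
    lda #$00
    sta $2003       ; OAMADDR = 0
    lda #$02
    sta $4014       ; DMA 256 bytes from $0200-$02FF into OAM
```

A game that skips the $2003 write is relying on whatever value OAMADDR happens to have, which is exactly where emulators can diverge from hardware.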
Sour wrote:
I thought I had fixed the issue with comments endlessly repeating themselves, but it looks like it was only fixed in Mesen-S and not Mesen.
If anyone else has experienced the endless repetition of comments, it may help to know that this problem has only occurred for me after I click File>Workspace>"Export Labels".
So, if you remove all of the extra comment lines from your .mlb file after each export, your .mlb file should always be in good shape.
Two requests:
1.
I use a laptop, and I frequently switch from using it at my desk with a second monitor to using it by itself with a single monitor.
The main emulator window always comes up fine, but it seems like all the other windows (PPU viewer, debugger, etc.) remember their position on the now non-existent monitor. I use
Dual Monitor Tools to force them to come back to the screen I'm using, but it happens to me continually and is momentarily confusing every time when the window doesn't appear. (If I didn't have Dual Monitor Tools it would be a lot harder to resolve, I think.)
2.
Manually marking things as "verified code" or "verified data" gives me some problems. The disassembler seems to override this and display things as code anyway, and the marking also gets overwritten the next time that "data" is run again. I'm not sure of the exact conditions that cause this, but it seems like if I try to switch one of the last couple of executed lines of code to "data", the disassembler view will not show them as data, instead continuing to display them as code no matter how I mark them.
If someone manually marks something, it would probably help if that could be a sticky choice. Something like "manual code, manual data, auto", where auto would go with whatever the verified option is like it currently does.
Also, the keyboard shortcuts for marking code/data (Ctrl + 1/2/3) seem to conflict with the shortcuts for opening the PPU viewers. (Or do I have old default settings that have propagated through...?)
Is Sour lost in Mesen-S land?
Sour, here is something I don't understand about Mesen... in its RTS description, Mesen's debugger states:
Quote:
...It pulls the status flags and program counter (minus 1) from the stack.
It seems to me that RTS doesn't affect the status flags; none of the status flags in the RTS description are even highlighted. I'm confused.
edit: I'm still using 0.9.8
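For reference, my understanding of the 6502 is that the quoted text describes RTI, not RTS - roughly:

```asm
; 6502 return instructions, as I understand them:
;   JSR pushes (return address - 1), i.e. the address of JSR's last byte.
;   RTS pulls that address and adds 1; the status flags are untouched.
;   RTI pulls the status flags first, then the program counter
;   (with no +1 adjustment); the flags ARE restored.
    jsr routine     ; pushes the address of JSR's third byte
    nop             ; execution resumes here after the RTS
routine:
    rts             ; pulls address, adds 1; P register unchanged
```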
No, I'm just lost in "taking a break" land at the moment - sorry for disappearing without notice for a few weeks.
I know there are some pull requests that have been done and are pending, will try to check & merge them over the weekend.
Still planning to take a break for a while longer, though - I was working on Mesen/Mesen-S constantly for over half a year, essentially spending all my free time on them, I needed (and still need) a break.
I'll get back to them eventually but it'll be a while longer. (plus, WoW Classic got released a week ago and I'm enjoying my time on it at the moment :p)
That's OK Sour, enjoy your much-needed break. All of your hard work has helped me immensely with game creation!
Thank you so much; enjoy WoW Classic.
Hi Sour, if I have a Lua script that I want to run with an HD pack, is it possible to set a game-specific script to run automatically as soon as the game starts? Thanks.