A user wants to use an NES debugger on a 64-bit Linux PC but refuses to use any non-free software out of principle and refuses to install any 32-bit free software (such as FCEUX for Windows in Wine) out of fear that the 32-bit support libraries will take several gigabytes and "hav[ing] better use for that space".
I respect sticking to free software on principle. But principles can be taken too far:
- If you refuse all non-free software, how do you connect to the Internet? The firmware of your machine's Wi-Fi radio is probably not free software. For example, in the United States, FCC rules require radio transmitter manufacturers to secure the firmware against changes that could cause it to violate Part 15.
- If you refuse all non-free software, how did you get into the NES in the first place? NES games published prior to 1997 were not free software.
The workaround in that topic was to attach gdb to FCEUX for SDL and debug the
emulator instead of debugging the game.
The disk space complaint is one I find somewhat legitimate. But I need more information before I can forward it to the experts at UNIX & Linux Stack Exchange. Here's what I have so far; it's missing the distribution and the size of the disk.
How can I run 32-bit Wine on 64-bit Linux with minimal disk space?
I want to run a free 32-bit Windows application (FCEUX debugger for Windows) on my 64-bit PC with an [unspecified size] HDD running [unspecified distribution] Linux. How can I set up a minimal environment to run Wine? Would a chroot work?
I tried running FCEUX for Linux, but unlike the Windows version of FCEUX, the SDL version of FCEUX lacks any debugging capability. I don't want to run FCEUX for Windows in Windows because Windows is non-free, and I don't want to install any non-free software on my PC. I considered Wine, and I have reports elsewhere that FCEUX for Windows works well in Wine, but I don't want to install several gigabytes of 32-bit support libraries into my 64-bit operating system.
Quote:
If you refuse all non-free software, how do you connect to the Internet? The firmware of your machine's Wi-Fi radio is probably not free software.
I connect via wired Ethernet, which incidentally needs no firmware. There is burned-in microcode in the chip, which I'd like to be free, but in the absence of suitably priced options, I can live with the burned microcode. It's close to a decade old, so should it have any backdoor, the likelihood of it still working is small.
Quote:
If you refuse all non-free software, how did you get into the NES in the first place? NES games published prior to 1997 were not free software.
My viewpoint comes primarily from security, with the ethics being second.
Running games on a non-networked console, with no or very specific permanent storage, is quite different from running code on a computer that has access to both my data and the internet. One can hurt me in various ways; the other cannot. Thus running a NES game on a NES has no security implications.
Games in general are art IMHO, so the ability to modify them is not as relevant as on productivity programs. They're an experience you can only get once, similar to a book or a movie (multiplayer excepted).
If Microsoft open sources Windows, then it might only be a matter of time. I'm already doing nesdev with all free 64-bit software (I use mingw64 and compile cc65 from source), then again I think the emulator binaries I'm using are 32-bit right at the moment...
calima wrote:
Quote:
If you refuse all non-free software, how do you connect to the Internet? The firmware of your machine's Wi-Fi radio is probably not free software.
I connect via wired Ethernet, which incidentally needs no firmware.
Good point, for those people who live where it is practical. Is your PC's BIOS also free? Or the firmware of your HDD's controller?
Quote:
Games in general are art IMHO, so the ability to modify them is not as relevant as on productivity programs.
The ability to adapt them to work on a new platform is still relevant. I mention this because I've been invited to draft an e-mail to
licensing@fsf.org regarding issues related to video game software. I've already set forth a few thoughts in my article about
genres of non-free software.
Quote:
They're an experience you can only get once, similar to a book or a movie (multiplayer excepted).
Your argument for allowing non-free video games is that video games have little replay value. If that's true, it's more an indictment of AAA unoriginality and the "walking simulator" fad among indie developers than of games in general. I'd draw an analogy to golf, bowling, track and field, and weightlifting, which are in essence single-player sports, yet a player is expected to improve over time. Likewise, several video games have areas available only in New Game+.
Quote:
Is your PC's BIOS also free? Or the firmware of your HDD's controller?
Neither is, but the ability to run Coreboot ranks very highly in any new purchase.
Quote:
The ability to adapt them to work on a new platform is still relevant.
I agree, but this only applies if there's any replay value, which I find there generally isn't. In a generalized game engine, such as RPG Maker, Ren'Py or Adventure Game Studio, it makes a lot of sense, like you write.
Quote:
I mention this because I've been invited to draft an e-mail to
licensing@fsf.org regarding issues related to video game software.
I'm surprised, because you give off a really anti-free vibe. Why do you continue to recommend non-free software, if you claim to be a supporter?
Quote:
Your argument for allowing non-free video games is that video games have little replay value. If that's true, it's more an indictment of AAA unoriginality and the "walking simulator" fad among indie developers than of games in general. I'd draw an analogy to golf, bowling, track and field, and weightlifting, which are in essence single-player sports, yet a player is expected to improve over time. Likewise, several video games have areas available only in New Game+.
I don't find any replay value in Zelda, Mario, or other such highly regarded games, so my opinion is certainly not limited to current AAA devs or walking simulators. It's rather about what I enjoy in a game, and that I have an excellent memory. If I know what happens next in a movie, book, or game, it is no longer fun to me. I acknowledge I'm in a minority in this position.
New Game+ is a cheap cheat to get more play time, and if something is exclusive to it, I think I'd boycott that developer. Why would I suffer through hours of tedium to get that exclusive content?
calima wrote:
I'm surprised, because you give off a really anti-free vibe.
It's very funny that you would get that impression of someone that's not anti-free at all? (Of all the regulars here, I'd guess that tepples is the most pro-free.)
calima wrote:
Why do you continue to recommend non-free software, if you claim to be a supporter?
Why do you think that having anything to do with non-free software has to make someone anti-free?
calima wrote:
I'm surprised, because you give off a really anti-free vibe. Why do you continue to recommend non-free software, if you claim to be a supporter?
I support free software where it is practical. I learned GIMP instead of Paint Shop Pro or Photoshop, and I run Xubuntu on my laptop. My NES and Super NES projects are free software (save one) and built using free software (ca65 and Python+Pillow), and my Super NES makefile can be switched from a proprietary emulator to a free emulator on a suitable machine.
But I'm also pragmatically aware that the market for computing devices running a completely free software stack is not enough to sustain economies of scale. You won't find
Purism Librem laptops in stores, for instance, and they don't make a full range of sizes (no 10-11", no 17") yet. And is it more enjoyable to play-test a Super NES game at one-fourth speed with choppy sound on a free emulator or on a proprietary emulator at full speed with correct sound? Because that's the speed difference between free bsnes and proprietary NO$SNS on the Atom N450 in my laptop.
Besides, I'm trying to help you find a free software solution to this problem. FCEUX for Windows is free, and it's not even
Java trapped because 32-bit Wine is free, even if it does allegedly require you to install gigabytes of free software to support it.
Perhaps I wasn't clear, but the implication of the "genres" page was that if you don't need games, movies, or tax software, you can get along fine with only free software. Am I "anti-free" because I was lead programmer on
Haunted: Halloween '85, a proprietary commercial NES game? And even that project used non-game-specific libraries that I had previously made available as (permissively licensed) free software.
Quote:
If I know what happens next in a movie, book, or game, it is no longer fun to me. I acknowledge I'm in a minority in this position.
You're going to hate me for this:
It
makes your
car more
aerodynamic.
rainwarrior wrote:
It's very funny that you would get that impression of someone that's not anti-free at all? (Of all the regulars here, I'd guess that tepples is the most pro-free.)
I got the impression, because he repeatedly tries to troll me over preferring free software.
tepples wrote:
And is it more enjoyable to play-test a Super NES game at one-fourth speed with choppy sound on a free emulator or on a proprietary emulator at full speed with correct sound? Because that's the speed difference between free bsnes and proprietary NO$SNS on the Atom N450 in my laptop.
Is something wrong with snes9x or zsnes? I remember running them years ago, full speed on a Pentium 3, and not even trying bsnes because everyone claimed it was too slow.
That is your choice, though if it were me, I wouldn't use the proprietary emulator; I would improve the best existing free one. You certainly have the skills, judging from your online profiles.
Quote:
Besides, I'm trying to help you find a free software solution to this problem.
My problem was solved. No need to go over it anymore.
Quote:
Perhaps I wasn't clear, but the implication of the "genres" page was that if you don't need games, movies, or tax software, you can get along fine with only free software.
Well, I live in a country with sane tax laws, and do my company's accounting and taxes using entirely free software. So it depends where you live.
If the software is free software then you may be able to compile it for 64-bit mode even if they don't provide 64-bit binaries. (This is one of the advantages of free software; there are also many other advantages.)
I used to use Windows but I now use Linux (the CPU of my old computer failed); this Linux is much better than Windows. However, I would let you run these programs on whatever operating system you want and use them with whatever other programs you want. Since you are going to run the programs on your own computer, I certainly do not intend to stop you from doing what you want with it.
I prefer to use and write free software; however, I often use software different from the common choices. I use i3-wm instead of the default window manager and desktop environment, I use xterm instead of the other terminal emulators, and for dealing with pictures I use ImageMagick instead of GIMP or whatever, and AmigaMML instead of MilkyTracker or whatever, and I use a highly customized version of Firefox.
I like the UNIX design and so write software that is based on such principles and will prefer to use such software too.
As far as I know, all native code I run on my computer is free software (with the possible exception of the BIOS and firmware). (I do run a few pieces of sandboxed, non-native proprietary software though, such as some DOS games.)
There are many games which are free software (although a lot of them aren't), and a small number of movies, and no tax software as far as I know (in any case I do not understand taxes and get other people to do it for me).
About Microsoft open sourcing Windows, I have read that too somewhere, although I highly doubt it. However it seems reasonable to me that Microsoft might open source several files that would help Wine and ReactOS projects.
About non-free computer games, I think it can also depend on the game and on the VM. For example, there are different styles of computer games and VMs. A multi-player game could be played over the internet; you just connect to the server with a telnet client, so you don't need to download their software. However, if it is free software, then you can also download your own copy, learn the rules of the game more precisely, modify it to the preferences of your group of players, and run it on a local network so it can even be used without an internet connection. You can have a free software implementation of the Z-machine or Famicom VM or whatever, even if the program running under the VM is not free software. (The other way around is also possible; it is up to the user of the program what they want to do.)
It is advantageous that SQLite is free software; for one thing, it even helped me to find a bug in the program (which as far as I know they still have not put into their bug report system; the bug is that using auxdata together with triggers will sometimes cause functions to use the wrong auxdata, because they keep track of only the line number and not which subprogram it is in).
calima wrote:
Is something wrong with snes9x or zsnes?
For one thing,
Snes9x's license has a clause prohibiting distribution for a fee, making it non-free. So switching from NO$SNS to Snes9x would not decrease the amount of non-free software on a computer.
Plus last time I checked, Snes9x and ZSNES allowed writing to video memory during draw time without forced blanking. That's NESticle-level inaccuracy, and video memory access during draw time is one of the most common reasons that homebrew and ROM hacks work in emulators but fail on hardware. The only reason you noticed you had a bug is that FCEUX is accurate enough not to let you do that. (It's inaccurate in other ways, but fortunately not that one.)
calima wrote:
My problem was solved. No need to go over it anymore.
In that case, you're welcome to write a guide to using gdb to debug NES games. I'd be interested to read about the details.
calima wrote:
Well, I live in a country with sane tax laws
I'll assume you didn't mean "move."
tepples wrote:
calima wrote:
Is something wrong with snes9x or zsnes?
For one thing, Snes9x's license has a clause prohibiting distribution for a fee, making it non-free. So switching from NO$SNS to Snes9x would not decrease the amount of non-free software on a computer.
Plus last time I checked, Snes9x and ZSNES allowed writing to video memory during draw time without forced blanking. That's NESticle-level inaccuracy, and video memory access during draw time is one of the most common reasons that homebrew and ROM hacks work in emulators but fail on hardware. The only reason you noticed you had a bug is that FCEUX is accurate enough not to let you do that. (It's inaccurate in other ways, but fortunately not that one.)
ZSNES also has a
known security flaw that allows for arbitrary code execution on the host machine. The many eyes of open source only get you so far when your code is unreadable x86 assembly!
tepples wrote:
For one thing, Snes9x's license has a clause prohibiting distribution for a fee, making it non-free. So switching from NO$SNS to Snes9x would not decrease the amount of non-free software on a computer.
It would still be an improvement, no? A program you can modify, and verify what it's doing.
Quote:
Plus last time I checked, Snes9x and ZSNES allowed writing to video memory during draw time without forced blanking. That's NESticle-level inaccuracy, and video memory access during draw time is one of the most common reasons that homebrew and ROM hacks work in emulators but fail on hardware. The only reason you noticed you had a bug is that FCEUX is accurate enough not to let you do that. (It's inaccurate in other ways, but fortunately not that one.)
Good point, yeah. Have you tried to optimize bsnes?
This particular bug would have been noticed on any emulator though; it would have prevented reacting to input even if the graphical glitches hadn't shown.
Quote:
In that case, you're welcome to write a guide to using gdb to debug NES games. I'd be interested to read about the details.
Eh, that's not really something I'm interested in. It was basic gdb usage, combined with fceux keeping the guest RAM in a pointer called RAM.
tepples wrote:
calima wrote:
Well, I live in a country with sane tax laws
I'll assume you didn't mean move.
Yeah, it was more "vote for someone who will make your laws sane".
Quote:
And is it more enjoyable to play-test a Super NES game at one-fourth speed with choppy sound on a free emulator or on a proprietary emulator at full speed with correct sound? Because that's the speed difference between free bsnes and proprietary NO$SNS on the Atom N450 in my laptop.
Use the performance profile on an Atom processor. I get 80fps on my MSI Wind, and 100fps on my NUC.
If you're only getting 15fps on your Atom with that, then it's due to user error.
Or use Snes9X v1.54 beta, which is nearly twice as fast and nearly as good as the performance profile.
Quote:
For one thing, Snes9x's license has a clause prohibiting distribution for a fee, making it non-free. So switching from NO$SNS to Snes9x would not decrease the amount of non-free software on a computer.
Only the most extreme zealots would equate closed-source, pure-x86-assembly, Win32-only, $2.50-for-the-latest-version NO$SNS with open-source Snes9X, written in C and ported to nearly every system imaginable. Plus, last time I benchmarked the two, Snes9X was faster.
Unless calima is intending to sell Snes9X, then it's not a problem for him.
Quote:
Have you tried to optimize bsnes?
This goes entirely against the purpose of the project. I mean, you can do it anyway, but most of the lost performance in the fastest profile is due to design requirements of the most accurate profile. It would be very easy to speed things up a lot, if one were to break the accuracy profile from building with the same core.
I continue to offer to start on a new SNES emulator that has a focus on performance, but still tries to be reasonable about accuracy. We could set up an SVN or Git repository for it. I'm willing to help out a lot, but I'm not willing to do it alone.
Especially not the GUI ... I want nothing to do with that.
byuu wrote:
Especially not the GUI ... I want nothing to do with that.
Use IUP? I know it's C but having used Qt and FLTK before (and looked into other GUIs) it seriously seems like one of the easiest to use out there (at the expense of making it harder to make heavily custom controls, but an emulator shouldn't need that).
Sik wrote:
byuu wrote:
Especially not the GUI ... I want nothing to do with that.
Use IUP? I know it's C but having used Qt and FLTK before (and looked into other GUIs) it seriously seems like one of the easiest to use out there (at the expense of making it harder to make heavily custom controls, but an emulator shouldn't need that).
I can't speak for byuu, but at least for me the problem is not that the GUI frameworks are hard to use; it's that doing user interfaces is really not much fun. If you talk about implementing a UI (i.e. you already know what you want), it's mostly grunt work. If you also have to design the UI, it's not easy, but at the same time it's also not a fun type of challenge, at least for me.
You could start with the existing GTK+ GUI of FCEUX for SDL and add things on top of that. Though apparently byuu ran into problems with certain missing or broken functionality in GTK+.
So anyway, I'll try bsnes performance again and take any problems I encounter to a new topic.
rainwarrior wrote:
It's very funny that you would get that impression of someone that's not anti-free at all? (Of all the regulars here, I'd guess that tepples is the most pro-free.)
In progressive politics, there are center left, hard left, and far left positions.
In conservative politics, there are center right, hard right, and far right positions.
And in software licensing politics, there are center free, hard free, and far free positions.
Sometimes what's "right" to one person (or to the median person) is "left" to someone else. My father once told me he thought FOX News was liberal, though less liberal than MSNBC and CNN. I may be more pro-free than many users here, but not quite so much as calima, and I think Richard Stallman takes it even further than calima. On the other side, Apple iOS is proprietary but less so than, say, Nintendo.
calima wrote:
I got the impression, because he repeatedly tries to troll me over preferring free software.
Repeatedly? And is it trolling or assessing? Besides, both Wine and FCEUX are free. A 32-bit program with source code available under a free software license is still free software.
On the other hand, Canonical seems about to prove calima right. Recently announced plans imply that Ubuntu is phasing out i686 (Pentium Pro, Pentium II, and newer 32-bit) support. (From
ubuntu-devel, via
SoylentNews) As I understand the discussion, the consensus is as follows:
- Ubuntu 16.10 and later will not have a 32-bit install disc but can be reached through net install or upgrading from Ubuntu 16.04.
- Ubuntu 18.10 and later will not have a 32-bit kernel or 32-bit libraries in the repository. All 32-bit applications, including Wine, will have to run in a container, as calima alluded to with the "32-bit chroot".
This would leave 18.04, to be released in about two years, as the last version of Ubuntu where Wine is useful. Wine users may have to jump ship to another distro.
EDIT (June 2019): Canonical delayed dropping 32-bit libraries until 19.10 (
thread) and later pledged to retain those libraries needed to run Wine and Steam games.
Quote:
of fear that the 32-bit support libraries will take several gigabytes and "hav[ing] better use for that space".
This has to be the biggest oxymoron I've ever seen. No efficient program should take more than 4GB, ever, and as such, 64-bit software is completely commercial bloatware to make tech-wannabes buy it.
Modern software is just written to waste RAM as if there's no tomorrow, and the whole transition to 64-bit is about that. A program compiled for 64-bit mode is going to use much more RAM, because instructions are longer and all pointers and some variables take more space.
tepples wrote:
Ubuntu 18.10 and later will not have a 32-bit kernel or 32-bit libraries in the repository. All 32-bit applications, including Wine, will have to run in a container, as calima alluded to with the "32-bit chroot".
Eh, Wine already uses a VM to run 16-bit programs. Wine providing a 32-bit VM is not as farfetched as it sounds.
There's a much bigger issue (OK, hypothetical for now, but...). UEFI can parse the filesystem and load the kernel for you, with the CPU already in the correct mode. Once UEFI becomes ubiquitous enough that we can just assume it's always there (and we're only a few years away from that), what are the chances that new x64 processors will just drop the legacy modes, leaving only long mode (64-bit)? I don't know how the boot process would be, but only UEFI would have to care anyway. On the other hand, it'd help simplify the CPU cores, which is a big deal given how absurdly complex the architecture is already. There's a good reason to expect 32-bit x86 programs to stop working because CPUs won't support them anymore (short of emulating them).
Bregalad wrote:
This has to be the biggest oxymoron i've ever seen. No efficient program should take more than 4GB, ever, and as such, 64-bit software is completely commercial bloatware to make tech-wanabes buy it.
The data those programs may be fed could potentially blow up to over 4GB though. And don't forget that long mode provides double the amount of registers which makes it much easier to optimize programs, not to mention that because the integer registers now are 64-bit it's a lot more reasonable to do 64-bit calculations (even if you still prefer 32-bit or less for
storing most data, calculation is a whole different can of worms if part of an expression can generate large values, as happens with multiplication). Also there are enough XMM registers to fit two whole 4×4 matrices into them (and yes, this matters during matrix multiplication - and no, you aren't going to be doing all of them on the GPU, usually the GPU handles the per-vertex ones but not the per-mesh ones).
Moreover, protected mode honestly has lots of bullshit stuff. Long mode is a big clean-up; 64-bit support is the lesser of its advantages in practice.
There's also the
x32 ABI, which gives you the advantages of 64-bit mode in a 32-bit address space.
Sik wrote:
On the other hand, it'd help simplify the CPU cores, which is a big deal given how absurdly complex the architecture is already. There's a good reason to expect 32-bit x86 programs to stop working because CPUs won't support them anymore (short of emulating them).
That doesn't feel likely. Intel removed a few things of dubious utility on the transition from 8086 -> 80186 (e.g. POP CS), and a few more things on the transition from x86 to x86_64 (e.g. BOUND), but I don't think the wholesale removal of this functionality is probable.
I mean, on Intel's x86_64 cores you can still switch from 64-bit to 32-bit to a 16-bit vm86. (AMD deprecated the vm86, but Intel still has it)
Bregalad wrote:
No efficient program should take more than 4GB, ever.
There are many applications that can benefit from large amounts of RAM.
Sometimes you can use the extra RAM for working with large amounts of media data. There's simply no "efficiency" substitute for it in a lot of situations. 3D modelling, video and audio editing, and games can all benefit hugely from the extra RAM.
There are whole classes of algorithms, like
dynamic programming, that allow order-of-magnitude speedups, but only if you have enough memory to match the size of the problem set. Memory-vs-speed tradeoffs. Compilers fall into this category, where better optimizations become practical with enough RAM.
tepples wrote:
A user wants to use an NES debugger on a 64-bit Linux PC but refuses to use any non-free software out of principle and refuses to install any 32-bit free software (such as FCEUX for Windows in Wine) out of fear that the 32-bit support libraries will take several gigabytes and "hav[ing] better use for that space".
Wow. People can be pretty eccentric ;>_>
I guess sometimes you have to make compromises; if not, you will miss good software because of principles only. If I did that, I wouldn't use VirtualBox because I don't agree with how Oracle does business, or even Ubuntu-based distributions because I don't like how Canonical is getting Microsoft-like, with the Amazon items in local search results that occurred in 14.04 if I remember well.
As for dropping 32 bits, I read somewhere there was some confusion on the subject, and what they will drop is i386-based platforms, since the hardware is hard to find, not i686, which supports 32 bits too. If I find where I read this again, and if it was not a misunderstanding on my part, I will post it here.
When they say i386 in that message, they mean "i386 and all binary-compatible extensions thereof", that is, i686 as opposed to amd64 which is not compatible. Canonical wants to entirely relegate 32-bit support in Ubuntu Desktop and Ubuntu Server to containers rather than the main repository within the next few years. Some of the endorsed but not fully Canonical-supported remixes, especially Xubuntu and Lubuntu, may retain 32-bit support longer, as their explicit goal includes repurposing PC hardware that can no longer run modern Windows well. And Xubuntu has never had the shopping lens crap.
I see, I misunderstood like the other people in that thread I found then.
Very happy that I tried Xubuntu; it's working quite well and is easy to configure in general. So sometimes it is good to go against your convictions and make a compromise.
Quote:
what they will drop is i386 based platforms since the hardware is hard to find, not i686 which supports 32 bits too.
I don't see where the 686 name comes from. Intel used 286 then 386, then 486 and then used Pentium, Pentium 2, etc... I don't think anybody ever released an "Intel 686" processor, I could be wrong, though.
Quote:
Sometimes you can use the extra RAM for working with large amounts of media data. There's simply no "efficiency" substitute for it in a lot of situations. 3D modelling, video and audio editing, and games can all benefit hugely from the extra RAM.
All things that only a niche of people actually do regularly. The only thing I do in that list is audio editing, and it's really basic Audacity usage, which I think works pretty well with 4GB of RAM or less.
Quote:
There are whole classes of algorithms, like dynamic programming, that allow order-of-magnitude speedups, but only if you have enough memory to match the size of the problem set. Memory-vs-speed tradeoffs. Compilers fall into this category, where better optimizations become practical with enough RAM.
The extra RAM can then help for speed, but is never a requirement in the first place.
For standard office tasks, email writing, and internet browsing, using so much RAM and 64-bit is complete overkill. Actually, maybe even 32-bit was already overkill in the first place, but they're so used to it - they just want to sell more and more hardware no matter whether it is necessary or not. People are used to changing their computers every 3-4 years because they use Windows and their computer gets slow, but they don't even know they could reinstall the system and make it fresh again without changing any hardware.
You know, stupid question: what is Canonical's idea about what to do with 32-bit programs, wrap them in a VM automatically or expect users to run their own VM? Because I noticed sh happily passes the ball onto Wine whenever it's told to run a Windows executable, so it sounds like the wrapping could just be automatic.
Bregalad wrote:
I don't see where the 686 name comes from. Intel used 286 then 386, then 486 and then used Pentium, Pentium 2, etc... I don't think anybody ever released an "Intel 686" processor, I could be wrong, though.
Internal naming scheme, I don't remember exactly where it comes from (586 was Pentium, 686 was Pentium Pro, numbering was dropped completely after that).
I
did find this though:
https://en.wikipedia.org/wiki/Cyrix_6x86 (which is in the same generation as the Pentium Pro, accordingly)
Bregalad wrote:
Actually maybe even 32-bit already was overkill in the first place, but they're so used to it
"640KB ought to be enough for everybody"
Now OK protected mode was certainly horribly overengineered (which also made its performance pretty awful), but to say that 16-bit x86 was good enough is stupid. And I already mentioned that the 64-bit address space is the lesser of the advantages from long mode, there's a lot to be gained just by switching to it even if the program uses little memory.
Then again there's the question why modern computers feel as slow as computers from a couple decades ago, but that's a different topic. Programmers getting absurdly sloppier and newer tools encouraging them to do so, in a nutshell.
Quote:
Now OK protected mode was certainly horribly overengineered (which also made its performance pretty awful), but to say that 16-bit x86 was good enough is stupid. And I already mentioned that the 64-bit address space is the lesser of the advantages from long mode, there's a lot to be gained just by switching to it even if the program uses little memory.
I never programmed any 8086 compatible machine in assembly, so I have absolutely no idea what you're talking about.
Also I am working right now on using an embedded system with a 386 compatible, but I don't think it is 686 compatible.
Sik wrote:
You know, stupid question: what is Canonical's idea about what to do with 32-bit programs, wrap them in a VM automatically or expect users to run their own VM?
Going forward, free software should be recompiled for x86-64, and proprietary software should be packaged in a
"snap" container.
Quote:
Because I noticed sh happily passes the ball onto Wine whenever it's told to run a Windows executable, so it sounds like the wrapping could just be automatic.
In theory, it could be. If Ubuntu continues to offer Wine, there are two options:
- Package Wine as a "snap" with all its 32-bit dependencies
- Run only 64-bit Windows programs, which would require ensuring that FCEUX (Win32) and FamiTracker are 64-bit clean
Bregalad wrote:
I never programmed any 8086 compatible machine in assembly, so I have absolutely no idea what you're talking about.
OK um:
- 16-bit: real mode
- 32-bit: protected mode
- 64-bit: long mode
Real mode gave you 1MB of address space, split into 64KB segments (each segment starting 16 bytes apart - yeah, there was overlap; what it did was basically segment * 16 + offset). Opcodes were less orthogonal as well (e.g. you couldn't do LEA arbitrarily). There were four segment registers: CS (code), DS (data), ES (extra) and SS (stack). The ES segment was mostly to help with operations working across two segments (e.g. copying data).
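To make the overlap concrete, here's a quick sketch of that translation using shell arithmetic (the segment:offset pairs are arbitrary examples): two different pairs resolve to the same physical address because the linear address is just segment * 16 + offset.

```shell
# Real-mode address translation: linear = (segment << 4) + offset.
addr1=$(( (0x1234 << 4) + 0x0005 ))   # 0x1234:0x0005
addr2=$(( (0x1000 << 4) + 0x2345 ))   # 0x1000:0x2345
printf '0x%05X 0x%05X\n' "$addr1" "$addr2"   # both come out as 0x12345
```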
Protected mode, besides making instructions more orthogonal and making the default register size 32-bit, gives you access to up to 4GB of address space per process (more physical memory with PAE, though Windows likes to see only 4GB unless you mess with a boot setting) and provides an MMU with paging (which means a virtual address range can be mapped onto arbitrary non-consecutive physical ranges). A given segment may be up to 4GB long, and there are four rings (0, 1, 2, 3), each given different execution privileges. Segments could be code or data, and could be 16-bit or 32-bit (regarding executing code). Accessing an invalid address triggers a General Protection Fault (or page fault), aka what we call a segfault. As you can imagine all this complexity means segments aren't addresses anymore, instead they're indices into a look-up table with all the segment properties, and it also meant that changing a segment register costs a very large amount of cycles (not to mention cache flushing). There were also two more segment registers (FS and GS).
...and most of that went ignored as usually only two rings would be used (for kernel and user space) and programs would normally just have three segments (code, data, stack) simulating a "flat" address space within each process and the segment registers only ever being touched by the kernel. Go figure.
Long mode gets rid of all that segment mess and just enforces a flat address space (aside from keeping FS and GS because Windows uses them to hold thread-specific data >_>). It also doubles the amount of registers (all of them, not just the integer ones), extends the integer ones to 64-bit, and gets rid of some rarely used opcodes. Essentially it's a much leaner version of protected mode that happens to also support 64-bit operations =P
tepples wrote:
Going forward, free software should be recompiled for x86-64, and proprietary software should be packaged in a
"snap" container.
Er, pretty sure you misunderstood both. Snaps are meant for all binaries (64-bit ones too), and they won't magically make 32-bit programs work out of nowhere if the kernel doesn't support it, which is the real problem from what I gather (hence the VM idea - it's literally about emulating those old programs). The only way snaps would work is if they actually included the full-blown VM themselves and wrapped the program in it.
Also that won't help with stuff like DOSBox, which still doesn't have a dynamic recompiler for 64-bit (so a 64-bit build is stuck with the slow interpreter), and I'm not sure how feasible that is. (on that note: somebody work on that please =P)
Sik wrote:
Bregalad wrote:
I never programmed any 8086 compatible machine in assembly, so I have absolutely no idea what you're talking about.
OK um:
- 16-bit: real mode
- 32-bit: protected mode
- 64-bit: long mode
I thought at least somebody in this topic had referred to a 16-bit protected mode, which existed in the 286. Windows 3.x was designed for it.
Sik wrote:
As you can imagine all this complexity means segments aren't addresses anymore, instead they're indices into a look-up table with all the segment properties, and it also meant that changing a segment register costs a very large amount of cycles (not to mention cache flushing). There were also two more segment registers (FS and GS).
...and most of that went ignored as usually only two rings would be used (for kernel and user space) and programs would normally just have three segments (code, data, stack) simulating a "flat" address space within each process and the segment registers only ever being touched by the kernel. Go figure.
Perhaps the cycle penalty is why the flat address space was chosen.
Sik wrote:
tepples wrote:
Going forward, free software should be recompiled for x86-64, and proprietary software should be packaged in a
"snap" container.
Er, pretty sure you misunderstood both. Snaps are meant for all binaries (64-bit ones too), and they won't magically make 32-bit programs work out of nowhere if the kernel doesn't support it, which is the real problem from what I gather
As I understood it, the kernel would still support both 32-bit and 64-bit programs, but the C library and other required userspace libraries wouldn't.
tepples wrote:
I thought at least somebody in this topic had referred to a 16-bit protected mode, which existed in the 286. Windows 3.x was designed for it.
Which isn't present on the 386 or later, so it's not relevant to modern systems. I think it's safe to assume we're talking about the 32-bit protected mode unless stated otherwise =P
tepples wrote:
Perhaps the cycle penalty is why the flat address space was chosen.
It probably had more to do with the fact that all programmers wanted was just more memory. There were already real mode programs that behaved as if they were flat by doing arithmetic on segment:offset addresses (which is also why Intel couldn't go with their original plan of making segment granularity not hardwired to 16 bytes).
What it did influence, though, is discouraging microkernels. Worse, it's why Windows 9x is so unsafe: the penalty introduced by a syscall (which requires going into kernel space) would have been gigantic when you consider how many syscalls are called over time, so they just put everything in ring 0. Ouch. (NT didn't, but it was also slower)
tepples wrote:
As I understood it, the kernel would still support both 32-bit and 64-bit programs, but the C library and other required userspace libraries wouldn't.
Then they wouldn't be talking about requiring running those programs from a VM (which you'd only do when the kernel is completely unable to run the programs on its own). I mean, Wine doesn't run 32-bit programs in a VM even though it provides all the libraries.
Sik wrote:
tepples wrote:
I thought at least somebody in this topic had referred to a 16-bit protected mode, which existed in the 286. Windows 3.x was designed for it.
Which isn't present on the 386 or latter so it's not relevant to modern systems. I think it's safe to assume we talk about the 32-bit protected mode unless stated otherwise =P
I'm not sure what you mean. My 486SX ran Windows 3.1.
Sik wrote:
tepples wrote:
As I understood it, the kernel would still support both 32-bit and 64-bit programs, but the C library and other required userspace libraries wouldn't.
Then they wouldn't be talking about requiring running those programs from a VM (which you'd only do when the kernel is completely unable to run the programs on its own). I mean, Wine doesn't run 32-bit programs in a VM even though it provides all the libraries.
The exact wording from the message was "snaps / containers / virtual machines". The first two are lighter weight than a full VM, more like a chroot or a Docker.
tepples wrote:
I'm not sure what you mean. My 486SX ran Windows 3.1.
Which also ran in real mode by default, and supported the 386's protected mode if you passed the /3 switch. (and Windows 3.11 always enabled it, effectively making a 386 or later a requirement)
286 protected mode is a completely different beast from 386 protected mode and isn't supported on any CPU other than the 286. Note that 386 protected mode can run 16-bit code (it depends on what you put in the segment descriptor, and really all it did was set the default size for 16/32-bit opcodes). It also had VM86 to run real mode code under a supervisor (as 32-bit Windows would do to run 16-bit programs). But it's not compatible with the 286's protected mode.
tepples wrote:
The exact wording from the message was "snaps / containers / virtual machines". The first two are lighter weight than a full VM, more like a chroot or a Docker.
OK reread it and you win =| Makes more sense.
OSDev's page on 64-bit x86 CPUs says that in 64-bit long mode, the compatibility mode for running 32-bit programs alongside 64-bit ones also supports 16-bit protected mode. This leads me to believe that support is still there, and has always been there from the 286 onward.
All modern x86 CPUs are almost completely backwards-compatible with older x86 CPUs, including 286 protected mode. There are a few situations where things aren't compatible, and they fit easily into three categories:
1. Software that relies on undocumented behavior. Things like "pop cs" were never explicitly documented by Intel, so no one should have been using them in the first place.
2. Software that ignores the documentation. A lot of 286 protected mode operating systems put important data in areas that Intel documented as "reserved, must be zero". The 286 ignored that reserved data, but the 386 uses it to control 32-bit protected mode. This may be the reason why it seems like 286 protected mode doesn't work on newer CPUs - it's actually because the developers ignored the manual. (Software developers ignoring Intel's documentation is a common theme in PC history...)
3. Software that relies on behavior that should have been backwards-compatible, but isn't. I know of exactly one compatibility-breaking change to the x86 architecture. No one ever seems to bring up anything in this category when discussing x86 backwards compatibility issues.
Joe wrote:
(Software developers ignoring Intel's documentation is a common theme in PC history...)
Not just software! 8259 IRQs 0 through 7 (interrupts 8-15) were configured to collide with a bunch of CPU-internal faults, because interrupts 0 through 0x1F were supposed to be reserved for the CPU. (And the original 8088 didn't yet have anything above interrupt 4.)
Joe wrote:
1. Software that relies on undocumented behavior. Things like "pop cs" were never explicitly documented by Intel, so no one should have been using them in the first place.
see also: LOADALL
LOADALL is kind of a funny case.
Intel documented it, but the documentation wasn't available to the public. Microsoft definitely had access to a copy of that document, which is why Microsoft's software only uses LOADALL when the CPU is a 286.
I have numbers. Numbers don't lie. I love numbers almost as much as π.*
I just installed Xubuntu 16.04 (amd64) on a blank SSD. This puts me in an unusual position to tell exactly how much space support for 32-bit free software on a 64-bit system takes. First I installed some 64-bit free software useful for participating in NESdev:
Code:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential gimp git python3-pil idle3 python3-numpy hexchat fceux retext
Then I ran this:
Code:
sudo apt-get install wine
The result is three-fourths of a gigabyte.
Code:
Need to get 192 MB/192 MB of archives.
After this operation, 735 MB of additional disk space will be used.
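To double-check apt's tally after installation, one could total the Installed-Size that dpkg records for every i386 package. This is a sketch using standard dpkg-query format fields; the exact total will vary by system:

```shell
# Installed-Size is in KiB; sum it over every package whose architecture is i386.
dpkg-query -W -f='${Installed-Size}\t${Package}:${Architecture}\n' \
  | grep ':i386$' \
  | awk -F'\t' '{kib += $1} END {printf "%.1f MiB in i386 packages\n", kib/1024}'
```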
But at this point, much of this could be rendered moot by Mesen for GNU/Linux.
* Yes, π is a number too.
I don't see the point of principles like "free/open source software only" except for the sake of posturing, and at that point you've already lost the patient. It just seems so needlessly self-limiting, but maybe I'm just utilitarian like that.
You cannot tell what a binary blob does, it is a security nightmare that could do anything from cryptolocking your files to installing Windows 10.
You could do the same thing you do with proprietary NES games: contain them. Once contained, proprietary software isn't substantially more dangerous than underhanded free software. If proprietary software can't write outside its chroot, for example, it can encrypt only those files that get served writably into its container. And the container's administrator can set a quota for /home so small that the only way Windows 10 gets installed is by adding a decimal point.
I still don't see how 32-bit free software, such as FCEUX (debugging version) in Wine in multiarch Linux, is likewise "a security nightmare".
I run mostly free software on my laptop, both 32-bit and 64-bit, with a small set of carefully chosen exceptions. A few weeks ago, I replaced my laptop's HDD with an SSD and installed a fresh copy of Xubuntu. So I decided to keep a list of all programs that I have installed. As far as I can tell, only these proprietary programs are installed on my laptop's SSD:
- The firmware that bcmwl-kernel-source installs
- Dropbox and Skype, needed to communicate with clients
- A set of freeware fonts, including Microsoft core fonts (which run either in a bytecode emulator or with autohinting)
- Scripts in HTML documents, which are restricted by the browser's sandbox, the same-origin policy, and Disconnect's blacklist
- Programs that I make that haven't been distributed to the public yet
- Programs that I make for clients, which must be proprietary because of industry constraints
- Programs made by other forum members, which I am evaluating
calima wrote:
You cannot tell what a binary blob does, it is a security nightmare that could do anything from cryptolocking your files to installing Windows 10.
I really honestly hope that you built your own computer. Not just "putting together parts", but literally designing every single piece of hardware in there. Otherwise, you can't trust ANYTHING.
Yeah, I've heard that firmware, for example, can be extremely dangerous...!
calima wrote:
tepples wrote:
And is it more enjoyable to play-test a Super NES game at one-fourth speed with choppy sound on a free emulator or on a proprietary emulator at full speed with correct sound? Because that's the speed difference between free bsnes and proprietary NO$SNS on the Atom N450 in my laptop.
Is something wrong with snes9x or zsnes?
That depends on whether there's a debugger for Snes9x other than (a) Geiger's or (b) debugging the emulator as an indirect means of debugging your game.
tepples wrote:
A user wants to use an NES debugger on a 64-bit Linux PC but refuses to use any non-free software out of principle and refuses to install any 32-bit free software (such as FCEUX for Windows in Wine) out of fear that the 32-bit support libraries will take several gigabytes and "hav[ing] better use for that space".
If higan had debugging features, that would certainly fit the bill. Other than that wine seems to be the only option.
Why are FCEUX's debugging features only on win32, anyway? Do they use Visual Studio's C++ API features?
nicklausw wrote:
Why are FCEUX's debugging features only on win32, anyway? Do they use Visual Studio's C++ API features?
All of the debugger GUI components are written exclusively using the Windows API.
Which, if I recall my TAS-emulator history right, was because it was that way on Gens, from which they took said parts.
Revenant wrote:
nicklausw wrote:
Why are FCEUX's debugging features only on win32, anyway? Do they use Visual Studio's C++ API features?
All of the debugger GUI components are written exclusively using the Windows API.
I've been tempted more than once to take the time to start writing GTK versions of the debugging components. But after checking out the code and starting to dig through it, I always end up shrugging my shoulders and installing WINE.
calima wrote:
You cannot tell what a binary blob does, it is a security nightmare that could do anything from cryptolocking your files to installing Windows 10.
If you are going to be so jailed by your own freedoms, then you'd be best off writing your own tools only.
Incidentally, I'm typing this in my own web browser, while my own PDF reader and stock watching program are open.
More about the lacking functionality in existing programs, but hey.
From my POV, it's simply stupid to give trust where none is deserved. You're risking everything on your computer just to run a proprietary program. If that program really is worth all your data, and what can be done with it, then do use it.
calima wrote:
You're risking everything on your computer just to run a proprietary program.
Which is why one runs it in a container that cannot read or write anything outside the container.
calima wrote:
You're risking everything on your computer just to run a proprietary program.
Are you running software as root? You may enjoy Jails.
Indeed, at the cost of that container/VM.
Quote:
Are you running software as root? You may enjoy Jails.
Of course not. But I'm aware bugs exist. The kernel I'm running on has easily hundreds of privilege escalation bugs, as well as bugs allowing one to escape a container.
You mentioned coreboot/libreboot earlier. A few PCs are compatible with libreboot. For everything else, the free operating system relies on proprietary firmware components, such as ACPI BIOS or UEFI. These act as the "container" for a free operating system, and like any other container, they may have defects that allow escalation.
tepples wrote:
A few PCs are compatible with libreboot. For everything else, the free operating system relies on proprietary firmware components, such as ACPI BIOS or UEFI. These act as the "container" for a free operating system, and like any other container, they may have defects that allow escalation.
What about the small embedded programs and microkernels on the dozens of embedded devices that make a computer's peripherals work? I speak of peripherals like NICs, memory controllers, USB controllers, sound cards, et cetera. Where is the line of trust drawn, and with what justification?
A regular NIC, sound card, etc does not have access to memory. Even a thoroughly malicious NIC is limited in what it can do.
calima wrote:
A regular NIC, sound card, etc does not have access to memory. Even a thoroughly malicious NIC is limited in what it can do.
A NIC has access to certain particularly important data - that which enters and exits the computer through it.
You have no idea how the Intel Management Engine in most intel CPUs from the last ten years interacts with any of these peripherals.
All PCI(-E) cards can be bus masters, and they usually are. A typical NIC writes packets to main RAM itself without CPU involvement; the driver just tells it exactly where (and if that NIC happens to have a boot ROM, it can do stuff before the OS and after the BIOS). Same deal with sound cards when they are recording, as well as any USB3 devices (USB2 and 1.1 are polled only; they cannot dump data wherever they want on their own). If the hardware wants, it can read or write any part of the system without any involvement of the CPU or knowledge on the OS side that something happened.
mikejmoffitt wrote:
You have no idea how the Intel Management Engine in most intel CPUs from the last ten years interacts with any of these peripherals.
No, I'm quite familiar with that. That's why I don't buy or recommend any such Intel CPUs.
The IOMMU prevents PCI cards reading arbitrary RAM, and I don't have USB 3 hardware.
When were Intel Management Engine and VT-d (Intel's name for IOMMU functionality) introduced? Which was before the other?
calima wrote:
mikejmoffitt wrote:
You have no idea how the Intel Management Engine in most intel CPUs from the last ten years interacts with any of these peripherals.
No, I'm quite familiar with that. That's why I don't buy or recommend any such Intel CPUs.
The IOMMU prevents PCI cards reading arbitrary RAM, and I don't have USB 3 hardware.
You can't be familiar with it because nobody is. I'm not questioning that you know about it, just what precisely it is up to.
I don't know which CPUs this leaves you with. AMD has had similar bits in their processors since 2013, and you have to go back quite far to get an Intel machine without it.
You can sidestep almost all of these issues with a computer isolated from any networking, allowing it to communicate only over thoroughly observable I/O at the user's request. But of course, that's absolutely silly when we're talking about developing NES games.
Until fairly recently, Nintendo used to require that authorized developers have a dedicated office not attached to a residence. I wonder if an air-gapped workstation used to be required. I know it's at least possible because I first tried NESdev back in 1999 with an air-gapped workstation, with an Iomega Zip drive as the only way to get files on and off the thing. I downloaded DJGPP, Allegro 3, x816, NESticle (yes, there was a time when it was the cutting-edge NES debugger), and other tools to a Zip disk on one dial-up-connected computer (a Mac), moved the drive to another computer (a 486 PC), and installed them on the other computer's HDD.
But the root of all this was unwillingness to install (free) Wine to run (free) FCEUX for Windows because of hundreds of MB of (free) i386 libraries it would pull in.
tepples wrote:
I wonder if an air-gapped workstation used to be required.
It was not.
TmEE wrote:
All PCI(-E) cards can be bus masters, and they usually are. A typical NIC writes packets to main RAM itself without CPU involvement; the driver just tells it exactly where (and if that NIC happens to have a boot ROM, it can do stuff before the OS and after the BIOS). Same deal with sound cards when they are recording, as well as any USB3 devices (USB2 and 1.1 are polled only; they cannot dump data wherever they want on their own). If the hardware wants, it can read or write any part of the system without any involvement of the CPU or knowledge on the OS side that something happened.
So, like what the NES expansion port (or SNES expansion port) could do. Though the NES port lacks the address bus, and the SNES port lacks the full address bus, one can still run code to modify anything.
Not quite, because you cannot do bus mastering on the NES or SNES; the CPU or some part of hardware in the console must do the transaction, and that's fully under CPU control. A bus master can do anything to the console whenever it wants. The US/EU Master System (but not the Mark III or JP Master System) has means to do bus mastering; technically you can make a game cart that takes the bus from the CPU and runs the entire game off the cart with the CPU perpetually frozen, pretty much controlling all the hardware in the console as it pleases. That's partially what one game I'm working on does: it uses a small MCU to do DMAs and a few other things to fully realize the VRAM bandwidth that the machine has (the Z80 can only use up about 10% of what is available).
I'm not yet familiar enough with the IOMMU mechanisms present in the x86 world to comment more deeply, but at first glance they're primarily for the OS side of things, to allow virtualisation with hardware assist, with some restrictions on the hardware side to work - aka not universal. AMD's spec seems to be superior to Intel's, but for now I will stop digging; I don't have time to read through 500 pages of material, especially since it doesn't benefit the work I'm supposed to be doing right now lol.
You have access to the databus. Inject a converging-clockslide-jump to un-mapped addresses (as sort of in the linked example), and/or continue driving the lines over whatever else is trying to use them.
Since you cannot tristate the host bus at will (I cannot see any such signal on the schematics), you only cause a bus fight, which is harmful to both sides. To do it without harm you have to control the enable signal going to the cart so that you can disable the cart on the relevant addresses and supply your own data...
mikejmoffitt wrote:
I don't know which CPUs this leaves you with. AMD has had similar bits in their processors since 2013, and you have to go back quite far to get an Intel machine without it.
Currently running a Phenom 2 from 2010. Considering getting the Pinebook when it ships, and would have gotten the Raptor if its price had been more in my budget.
calima wrote:
If I know what happens next in a movie, book, or game, it is no longer fun to me. I acknowledge I'm in a minority in this position.
For the record, the opposite end of the spectrum is this Slashdot user.
As for the topic: Nowadays, if you're willing to install 64-bit Mono, Mesen should be mature enough. The problem comes when you branch out to Game Boy, where one best of breed debugging emulator (BGB) is proprietary, and the other best of breed debugging emulator (SameBoy) is Java-trapped because it relies on parts of Cocoa (macOS API) that GNUstep hasn't replicated, such as audio.
Joe wrote:
There's also the x32 ABI, which gives you the advantages of 64-bit mode in a 32-bit address space.
Good for embedded, where everything can be rebuilt for x32. Apple Watch uses an analogous ABI with 32-bit pointers on AArch64. But it's not quite as suitable for desktop because with x32 you need three sets of libraries. Assuming everything whose developers are willing to build and test on x32 is x32, you still need the x86-64 libraries for x86-64-only applications and the x86 libraries for x86-only applications. Its failure to catch on is part of why Linux kernel developers had begun to consider dropping it by December 2018.
Sik wrote:
what is Canonical's idea about what to do with 32-bit programs, wrap them in a VM automatically or expect users to run their own VM?
Canonical expects each application's publisher to offer a snap with the application and the 32-bit core libraries. (A snap is a container, which is lighter weight than a full-on VM.) If your application's publisher went out of business in 2010, too bad.
tepples wrote:
Run only 64-bit Windows programs, which would require ensuring that FCEUX (Win32) and FamiTracker are 64-bit clean
Discussion of this with respect to FCEUX continues in the thread:
Catalina Wine Killer
With respect to FamiTracker: jimbo1qaz has announced in the FamiTracker Discord server an intent to resurrect cpow's port of FamiTracker to Qt.
With respect to bgb, a Game Boy debugger: beware is providing experimental Windows x64 builds.
calima wrote:
bugs allowing one to escape a container.
Such bugs exist in ZSNES. A Super NES program using one of the coprocessors can launch native i386 code using an out-of-bounds DMA.