So I get up this morning and am treated to the sound of the server clicking and banging: the primary HD, naturally. Everything should be safely backed up, so I'm not too worried about that, but the PC itself seems to be having issues (e.g. not POSTing, and when it does manage to, it won't boot from a disk anyway). So I'm in a bit of a bind. I can probably cobble something else together from old parts, but I'm thinking maybe it's time I just got it running on a real web server instead of from my house.
It was running Apache for the web server, MySQL for the database, and PHP for scripting. Anyone have any recommendations for a service offering this setup?
I can recommend/suggest using Vultr.com for a VPS system (what's called a "Compute Instance"), which you can install whatever Linux/BSD distro on and then install apache/mysql/php per whatever methodologies you have in the past, starting at about US$5/month, depending on your disk capacity needs. Migrating the MySQL data should be fairly easy (mysqldump).
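If it helps, the dump/restore cycle is only a few commands. A sketch, with the database name, user, and hostnames as placeholders:

```shell
# On the old server: dump schema + data in one file.
# --single-transaction gives a consistent snapshot for InnoDB tables.
mysqldump --single-transaction -u dbuser -p nescartdb > nescartdb.sql

# Copy the dump to the new VPS.
scp nescartdb.sql user@new-vps.example.com:~/

# On the new VPS: create the database and load the dump.
mysql -u root -p -e 'CREATE DATABASE nescartdb'
mysql -u root -p nescartdb < nescartdb.sql
```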
Other places I've tried for VPS (cheap) and/or dedicated server (expensive):
* ARP Networks -- excellent, especially for BSD. Extremely well-managed network (top-notch peering), support is quite good (24 hour response time), and the owner (Garry) is extremely friendly. On the downside, disk I/O is extremely slow.
* cari.net -- good experience, though I only used them for a dedicated server (expensive). Overall I'm happier with Vultr though.
* ComfortVPS -- bad experience. From my notes circa 2014/03/02: dogshit slow, boot-up times took full minutes, and the kernel device probe took forever, while during certain times of the day things would perform decently; this indicates oversubscribed hypervisors.
* Linode -- quite good (esp. for Linux), inexpensive, easy to use. Stay away from their Fremont site, however -- that's hosted entirely by Hurricane Electric, who has an extremely long track record of awful service (
my story -- and yes, the situation is still the same as of 2016).
* RootBSD -- I forget what my experience with them was like, sorry to say
* vr.org -- Not entirely sure about reliability/etc. -- I couldn't get their VNC console capability working no matter what I did, and their Support kept insisting it was because "I was using a PC and not a Mac" (yeah, uh, bye).
As for Vultr -- they're quite good, but I have had several issues with them (either networking-related or hypervisor-related). They tend to fix things quickly. The one big negative with them is that their support/admin staff doesn't seem to understand how to do maintenance of HVs correctly: the guests (VPSes) are powered off abruptly, rather than issued ACPI shutdowns (i.e. expect your filesystems to come up dirty after they do HV maintenance or have an HV outage -- strongly recommend using a checksumming filesystem because of this, but if you can't, bare minimum use something that's journalled (e.g. ext4, XFS, etc.) so that at least you don't have filesystem corruption). I've talked to the guy who manages their support or admin staff, and supposedly they've all been trained to not do that any more (he confirmed they were in fact abruptly shutting things off and would issue ACPI shutdowns going forward), but the last time they did HV maintenance my stuff came up with dirty filesystems.
That said: their disk I/O is fast (them bragging about use of SSDs is certainly true), and CPU doesn't feel oversubscribed. They also offer disk snapshot capability, and you can restore a snapshot in another datacenter too (e.g. if you wanna move from one datacenter to another). You can also get them to use whatever ISO image you give them (I forget if it's an HTTP upload or a URL to an ISO) for installation, so you don't necessarily have to use the OSes they pre-stock.
Well, this has turned into a goddamn nightmare. The primary drive definitely has mechanical problems: the head will bang on the side a few times, wait a few seconds, and repeat. I pulled the backup drive to try to pull data from it, but the damn thing won't mount in the OS no matter how I hook it up. It sounds normal, but for whatever reason it won't initialize completely. I also had a weekly backup scheduled to a network drive on a different PC, but wouldn't you know it, it has not been running as scheduled.
The best I have right now is a hard copy from the last time the HD failed, in Sept 2013. Granted, the site hasn't been very active the past couple of years, but this is still beyond disgusting. It seems my only option is to send the drive in for recovery and spend big $$$ that I don't really have anyway.
Anyone have experience with data recovery services?
What you're hearing is most likely the actuator arm resetting into the landing zone when trying to find sector 0 and/or the HPA region. Could be a bad/unreadable sector, or it could be a head that has gone bad. One thing *not* to do is run any "recovery software" on the origin/source drive, no matter what anyone online tells you -- trust me on this. It will very likely make the situation worse if you need to rely on a data recovery company. Most companies will want between US$500 and US$2000 for recovery, depending on what the nature of the problem is. A slightly less expensive company is
http://300dollardatarecovery.com/ but I haven't used them so I can't attest to their success rate or quality of work.
I would suggest trying to get the data off the backup drive, since that sounds like it's functional/working but isn't accessible for some odd reason. It's at least (probably) in better shape given that it's not making mechanical noises. :-)
I've done data recovery for several people (I do it all with software, and don't modify the source/origin drive assuming I'm able to get it working). I don't do any physical modifications to the drive (aside from maybe a PCB swap, if I deem that worth trying). I recently did data recovery for a friend of Gideon_Zhi's who had two WD Passport drives which went bad (these don't provide SATA interfaces even on the PCB, it's all pure USB); I was able to recover 100% of the data on one, but the other had mechanical problems (I don't do head stack replacements, etc.). Details of my work are public:
here (successful) and
here (failed). I do all this for free. If you'd like, you can mail me the drives and I can see what I can do. I'd need to know drive models, capacity, partitioning details (if any; particularly MBR vs. GPT, if you know), and filesystem type.
Highly recommend
http://300dollardatarecovery.com/ -- There are people out there trying very hard to give this company a bad name, ignore any bad reviews you read online.
I have had many drives recovered by them (>10) and only two have had to be sent to a third party.
If you want assistance and advice on where and how to host the website, or even want someone to manage it for you, send me a PM. I host most of my sites between OVH and HostGator, highly recommend picking a host that can provide VPS with cPanel.
I have heard of $300 data recovery before, and I had read a lot of bad things about them elsewhere. Hard to know what to believe these days. Two others I have been looking into are Gillware and Fields Data Recovery. Their price range is $200 ~ $900, but they both diagnose drives for free and tell you exactly what can be recovered before you're required to pay anything.
Here's my recommendation:
- Set up NesCartDB on a VPS (or even a shared host like HostGator) with the September 2013 data set.
- Going forward, make XML dumps available on a regular schedule so that other NESdev members (such as myself) can help you with data backups. I have some experience with this, having written a bot to scrape a MediaWiki site through its API and produce a static copy of NESdev Wiki for offline reading.
- Going forward, weaken the image hotlinking protection so that other NESdev members (such as myself) can help you with image backups.
- Once at least something is up and running, see what can be recovered of the past three years of submissions.
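For the XML-dump part of this, even a simple cron'd fetch would let members keep their own copies. A sketch -- the dump URL is hypothetical, standing in for whatever endpoint the site ends up exposing:

```shell
# Fetch the latest XML dump once a day and keep dated copies.
DUMP_URL="http://nescartdb.example.com/dumps/nescartdb.xml"
DEST="$HOME/nescartdb-backups"

mkdir -p "$DEST"
curl -fsS "$DUMP_URL" -o "$DEST/nescartdb-$(date +%Y%m%d).xml"

# Prune copies older than 90 days so the mirror doesn't grow forever.
find "$DEST" -name 'nescartdb-*.xml' -mtime +90 -delete
```

With a handful of members running something like this, any one copy surviving is enough to rebuild.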
Well if you do have to go that route, I wouldn't mind making a small contribution. Pm me if you're interested.
Ditto. PM me if you need funds related to the recovery operation.
On the backup drive which won't mount; have you tried seeing if you can get to the partition via testdisk?
People in the US generally can't run servers from their house without violating their cable company's terms of service.
koitsu wrote:
Other places I've tried for VPS (cheap) and/or dedicated server (expensive):
* ARP Networks -- excellent, especially for BSD. Extremely well-managed network (top-notch peering), support is quite good (24 hour response time), and the owner (Garry) is extremely friendly. On the downside, disk I/O is extremely slow.
...
Hey guys,
I'm the extremely friendly owner of ARP Networks, Garry Dolley.
I just wanted to say thanks, koitsu, for the kind words. I also wanted to clarify something regarding our disk I/O. We are currently transitioning away from our older systems, whose I/O was great at one time but hasn't kept up with the competition, to much faster systems with SSD-backed storage and a Ceph cluster. Soon, all users on our platform, old and new, will enjoy I/O speeds even faster than the competition, and I look forward to providing this to everyone this summer.
--
Garry Dolley
ARP Networks, Inc.
https://arpnetworks.com
The drive that won't mount is problematic to test because just having it installed will prevent the system from booting. I've had it in an XP and a Win7 machine, and both will stall indefinitely during the boot process while initializing disks. I've got an eSATA enclosure; putting it in there causes the enclosure to continually try to re-initialize. I tried hot-swapping it into an XP machine and "scanning for hardware changes": the disk shows up in Device Manager, but the HD activity light stays on while the system continually tries to mount it, and any disk-related programs won't start because the OS is busy. Not sure if this implies a hardware problem of some sort or whether corruption in a critical area of the disk could cause this. The BIOS on the XP machine has a basic SMART test function that runs for about 2 minutes; the disk makes normal activity sounds and passes the test.
I think I'm going to send in the primary to the $300 guys and see what they can do. I appreciate the offers to help finance the recovery, as money for extra stuff like this is a little tight right now. I'll wait until a price is settled and I'm told data recovery is feasible before I accept any donations, though.
What happens if you boot from Xubuntu on a USB flash drive and then try to mount the problematic drive read-only?
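If you go the live-USB route, the important part is mounting read-only so the OS never writes to the disk. A sketch, assuming the backup drive shows up as /dev/sdb with a single partition (device names are guesses; check with lsblk first):

```shell
# Identify the drive -- model and size will tell you which one it is.
lsblk -o NAME,SIZE,MODEL

# Mount the partition read-only; 'ro' prevents all writes.
# (For ext3/ext4 you can add ',noload' to also skip journal replay.)
sudo mkdir -p /mnt/rescue
sudo mount -o ro /dev/sdb1 /mnt/rescue

# Copy off what you need, then unmount cleanly.
cp -a /mnt/rescue/important-stuff "$HOME"/
sudo umount /mnt/rescue
```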
I haven't tried any Linux-based approaches, and I have very little experience with Linux-based OSes in general. If there's a ready-to-go image file that I can just drop on a USB drive or burn onto a disc, I guess it's worth a shot.
Purely some educational responses -- you've got your game plan (sending the primary to the $300 recovery place), so stick with it. The tools a data recovery company uses won't be "attach the drive to a Windows PC and let it go"; more likely "attach the drive to a Windows PC using PC-3000 and its controller, then start observing the behaviour". PC-3000 lets you control everything down to the SATA signalling (yes, the signalling!).
1. Windows does a lot of bullshit I/O to a drive without giving the user any indication it's doing so. Linux, FreeBSD, etc. don't generally do this; they tend to issue ATA IDENTIFY to get drive parameters and characteristics, followed by reading LBA 0 (and deciding to read MBR or GPT), followed by what's called "tasting" (looking for partitions and partition identification), and that's mostly it. You can turn off "tasting" in a lot of situations (at least on FreeBSD). Rephrased: when dealing with misbehaving or questionable disks, you want as *little* I/O done to the drive by the OS as possible. With actual data recovery suites like PC-3000, basically no I/O is done to the drive until the operator tells it to. There's a reason for that. :-)
In short, Windows tends to give the user absolutely no indication (or way to find out) what it's *actually doing* under the hood. It sucks for troubleshooting. Furthermore, Windows also wants to do a lot of things "automatically" -- a common one is to prompt a user for drive reinitialisation (read: repartitioning + formatting) even sometimes in I/O timeout scenarios (varies per SATA chipset). It's very dangerous. Automatic CHKDSK or journal recovery is another one that can cause all sorts of problems once I/O to a drive starts working -- it's very hard to get your data back after that (this is one of several reasons why not to work on source/original material! -- No data recovery company does that).
2. The drive being put into an eSATA enclosure (and continuing to act questionably) is a good test, but usually this isn't necessary. In the recovery industry, they actually do the opposite -- take the drive out of such enclosures and work on it via whatever native I/O interface it has. Enclosures contain SATA/USB bridges most of the time (some of the SATA/eSATA ICs used are considered this because they actually have USB support, regardless if the interface is wired up or not), and those bridges tend to do very questionable things (they literally intercept every ATA command and payload sent to the drive, often filtering out some and/or rewriting several -- they also commonly lie about drive capacity (LBA count)). The worst of the bunch are ones which support RAID.
Regardless, you doing this as an additional test (to rule out interface problems) is understandable, so don't feel like your troubleshooting efforts were for naught.
3. Hot-swapping is another worthwhile technique, as long as you know the interface you're hooking it up to has proper hot-swap support (rephrased: just because something has a SATA port doesn't mean hot-swap will function correctly). I've actually done this on machines which don't have proper hot-swap backplanes and caused physical sparks when hooking up the SATA power connector (there are capacitors and other things involved in actual hot-swap backplanes that keep this from happening) followed by the machine abruptly powering off. Scary.
But with a proper hot-swap drive bay or backplane, yes, it's sometimes worth trying. Occasionally (read: rarely) I have gotten drives which wouldn't allow I/O on a cold boot to actually work this way (i.e. attach drive + SATA power cable, but no SATA data cable; wait 30-60 seconds, then attach SATA data cable).
4. The HDD activity light staying on just means the SATA chipset (either in the enclosure, or the one natively attached to your motherboard, depending on how you have the drive hooked up) hasn't been told to shut the LED off. In other words: an ATA command or series of commands have been issued to the controller, the controller has issued them to the drive, but the controller is still waiting for response for the CDB. The question then becomes "what CDB did the OS send to induce that?" For example, maybe reading LBA 0 works, but reading LBAs that make up the primary GPT area cause the drive to go catatonic -- in which case, you need to be able to tell the OS to skip reading the primary GPT and skip immediately to the secondary (at the end of the drive). It's just one of hundreds of possibilities of course.
Figuring out if a specific LBA is what causes a drive to go catatonic/lock up or not, or if it happens even on something like ATA IDENTIFY, is important. I've done recovery (see those links I posted before) where a drive would go catatonic on *any I/O* past about 5-6 seconds of being powered on, and I was able to find a very clever workaround which let me work around the problem and allow for 100% data recovery. I actually streamed it live too (it was several weeks ago so it's no longer on Twitch, sorry to say), so some friends/viewers found it interesting.
5. "SMART checks" inside of system BIOSes do not do actual verification of drive functionality. What they do is issue an ATA CDB to the drive that asks the drive itself for the "SMART health status" and the result is either "OK" or "BAD". I'm keeping it simple here. The drive itself decides this based on SMART attribute thresholds being crossed. It is NOT a drive test, nor is it the same as a SMART short/long/conveyance/selective test. All the BIOS setting does is inform you whether or not the "overall SMART health status" is OK or not, and this normally takes about, oh I don't know, a few milliseconds. SMART actually has tons of very useful features for testing and analysis, but they aren't something a BIOS makes use of. I have *extensive* familiarity with SMART, so I could talk about this for days. (I'm kind of the "go-to guy" on dslreports.com for disk and storage issues)
The good thing you should know: if the SMART health check took 2 minutes, that means SMART RETURN STATUS either a) took 2 full minutes to return successfully (which means it's VERY slow -- that indicates something anomalous, but also that I/O to the drive can/does work in some circumstances!) or b) took 2 full minutes, timed out internally, and the BIOS didn't report it (very possible -- I can't tell you how many BIOS bugs I've found, heh...).
Really, a review of SMART attributes on the drive would be most helpful, but the attributes are -- ready? -- stored on the platters in a reserved area, so if one of those has gone bad, and the drive has problems with LBA reallocation within reserved areas, then this could actually block the drive from returning anything (until power-cycled).
Anyway, I think that covers that part of the "SMART check". I could go into more details but eh, it doesn't help much at this point (if I have the drive it'd be more definitive).
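For the curious: the full SMART attribute table described above is readable from Linux/FreeBSD with smartmontools. A sketch, with /dev/sdb as a placeholder device:

```shell
# Overall health status -- the same pass/fail the BIOS check asks for.
sudo smartctl -H /dev/sdb

# The full attribute table. The interesting ones on a failing drive:
# Reallocated_Sector_Ct (5), Current_Pending_Sector (197),
# Offline_Uncorrectable (198).
sudo smartctl -A /dev/sdb

# Kick off the drive's own short self-test (~2 min), then read results.
sudo smartctl -t short /dev/sdb
sudo smartctl -l selftest /dev/sdb
```

Reading attributes is non-destructive, but as noted above, even that can hang a drive whose reserved areas have gone bad.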
I will say this much, though: it's very likely your drive has fully functional/working firmware (guess where that's stored too, on most drives (but not all)? You guessed it: platters!), but that something like the reallocation table has become corrupted or gone bad in some way (this is what I dealt with on the earlier drive I mentioned where I got 100% recovery), or that there's a specific LBA/sector that the drive locks up when attempting to access (this is incredibly common).
Like I said, this post was purely for educational/learning purposes, it doesn't help get you any closer to getting the data back. The data recovery company, on the other hand, should take care of that for you. PLEASE LET US KNOW HOW IT GOES! I always want to know what data recovery services work for people and don't -- I like being able to refer people to one or more, especially for things I can't do recovery on (i.e. physical/mechanical problems).
Pretty nice explanation koitsu!!
Could you suggest some readings about this subject? It's very interesting and I would like to know more.
Well, as a suggestion, I second the one who told you to try booting Ubuntu and mounting the drive that seems fine. It happened many times to me (Windows wouldn't mount a drive but Linux would); that's one of the reasons I left Windows a while ago. I also suggest you try Ubuntu MATE, since it has a lighter graphical interface. Ubuntu has some problems with older hardware, if that's the case, but it's really easy to use!!
Another thing I second is the use of TestDisk! Man, this program is excellent!! It has saved me and some friends many times, and there's a Windows version too; just make sure you get yours from
http://www.cgsecurity.org, to be safe.
Another suggestion is to host BootGod's site together with NesDev. Could this be possible? I think it would be nice to have bootgod.nesdev.com!! Then BootGod would be part of the staff (if he isn't already).
Fisher wrote:
Pretty nice explanation koitsu!!
Could you suggest some readings about this subject? It's very interesting and I would like to know more.
Sadly there isn't much information of this sort "available to the general masses" for two reasons: 1) it's fairly low level (bordering on obscure), and 2) data recovery companies
really don't like it when people give out anything even remotely close to their "secret sauce". I've been tempted to blog about the latter, because as I found out doing recovery myself, there are actual people in the data recovery industry who
intentionally will try to sabotage your efforts (apparently including, in one case, a guy who sent his drive to a person who worked for a data recovery company to help him with the recovery, who proceeded to intentionally brick the drive so that the fellow *had* to pay for data recovery services), or bare minimum insult you to the point where you lose faith. The HDDGuru forum is filled with these people; it's why a couple of the more helpful and intelligent users created the HDDOracle site.
I got my knowledge via a couple decades of dealing with bad ATA and SCSI disks, examining behaviour of ATA and SCSI drivers (in FreeBSD mainly), talking to driver developers, and actually reading as much documentation as I could find (including T10 and T13 specifications for both protocols).
I do not work in the data recovery industry. I'm just a UNIX SA who knows a *lot* about "everything storage".
Fisher wrote:
... Other thing that I second is the use of Testdisk! Man, this program is excellent!! It saved me and some friends just many times :-) and there's a Windows version too, just make sure you get yours from
http://www.cgsecurity.org, to be safe.
I warned about this kind of (extremely bad) advice already in my earlier posts:
koitsu wrote:
... One thing *not* to do is run any "recovery software" on the origin/source drive, no matter what anyone online tells you -- trust me on this. It will very likely make the situation worse if you need to rely on a data recovery company.
... (this is one of several reasons why not to work on source/original material! -- No data recovery company does that).
TestDisk from CGSecurity will make modifications to a drive in attempt to do recovery.
DO NOT USE THIS PROGRAM ON THE DRIVE WHICH IS MALFUNCTIONING, especially since the drive is going to a data recovery company! I cannot stress this enough! You **never** want to issue any writes (NOT EVEN ONE!) to a drive which is malfunctioning. All of these programs (examples: TestDisk, SpinRite, Recuva, ZAR, GetDataBack) will issue writes to the drive when trying to "recover" data (SpinRite is the worst of the bunch).
The only time you want to run these kinds of programs on a malfunctioning drive
is if you have NO PLANS to use a data recovery company, have no problems with the strong possibility of things being made worse, or just want to do it for educational purposes. That's it. (Data point: I am a paid customer of ZAR and GetDataBack, and I do use these utilities from time to time when recovering NTFS/FAT data from drives which have already undergone being duplicated/copied).
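The safe alternative, if you do want to experiment yourself, is to first image the drive with GNU ddrescue (which only reads the source) and then point recovery tools at the image. A sketch, with device and file names as placeholders:

```shell
# Image the failing drive to a file, never writing to the source.
# The map file records bad regions and lets you stop and resume.
sudo ddrescue -d /dev/sdb disk.img disk.map

# Optional second pass: retry the bad areas a few more times.
sudo ddrescue -d -r3 /dev/sdb disk.img disk.map

# Now run TestDisk (or any recovery tool) against the IMAGE,
# not the drive -- the original stays untouched.
testdisk disk.img
```

This is essentially what the software side of professional recovery does too: duplicate first, then operate on the copy.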
Fisher wrote:
Another suggestion is to host Bootgod's site together with NesDev. Could this be possible? I think it would be nice to have bootgod.nesdev.com!! So Bootgod would be part of the staff (if he is not already).
That could be made possible already by simply adding a DNS CNAME record (bootgod.nesdev.com IN CNAME bootgod.dyndns.org.). You'd still have to visit it via
http://bootgod.nesdev.com:7777/ though, because he runs the website on an alternate port. He also might have to modify his Apache configuration (depending on how he set it up to begin with) to allow Host: headers matching bootgod.nesdev.com.
But really, think about it for a moment: none of that really matters. People won't care if the site is at bootgod.nesdev.com or bootgodssupercooldb.supersnakesonaplane.net or anywhere else. My point is that sticking it under the nesdev.com domain just makes it "vanity", which is something that should've died in the early 2000s (people wanting hosting or hostnames under a specific domain because "it looks cool"). :P But yes, it's completely possible to do.
Furthermore, if he was to be hosted by 8bitalley (who hosts nesdev/etc.), that's putting even more eggs in one basket (it's all hosted on one machine, etc.). If this situation has taught us (as a group) anything, it's: 1) check and test your backups regularly (I recommend doing a full bare-metal restore once a year), and 2) have some form of redundancy in place -- or if not, at least have a "contingency plan" for situations like this (i.e. an alternate hosting provider to whom you can rsync a backup over to and get things back up and running in a day). I'd honestly recommend just doing hosting at a VPS provider and paying US$5-10/month for service, then doing backups by rsync'ing, and/or SSH+mysqldump'ing the data off the VPS (and storing it at home, and/or somewhere else) on a daily basis, or maybe even twice a day. That's what I do with my VPS, and have done for years (and yes I've had to recover a few times).
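The daily pull described above can be a short cron job on a home machine. A sketch, with hostname, credentials, paths, and database name all as placeholders:

```shell
# Nightly: dump the database over SSH and keep a dated, compressed copy.
ssh user@vps.example.com \
    'mysqldump --single-transaction -u dbuser -p nescartdb' \
    | gzip > "$HOME/backups/nescartdb-$(date +%Y%m%d).sql.gz"

# Mirror the web root; --delete keeps the copy an exact mirror.
rsync -az --delete user@vps.example.com:/var/www/ "$HOME/backups/www/"
```

(In a real cron job you'd use key-based SSH auth and a MySQL option file rather than interactive passwords.)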
One reason that I make the offline HTML snapshots of wiki.nesdev.com is to test my own backup of the wiki. After this situation, I'd end up doing the same for a revived NesCartDB once XML dumps are made available regularly.
I suppose it's worth asking if BootGod will be willing to share, or at least make distributed backups of, the sql database that was underneath.
Oh yeah, living and learning Koitsu.
I've always suspected there was a lot of misleading information on these HDD forums... You confirmed my theory!
I thought that TestDisk didn't write anything to the "sick" hard drive unless you told it to do so. I'll take extra care from now on, since doing this is like trying to quench your thirst with french fries.
Thanks for showing me all this!!
About BootGod's destiny: I gave a suggestion and it seems possible to do. I've seen the wiki defaced a couple of times, and moments later it was OK again. This shows that Tepples' backup routines are working fine. In the end, it's up to BootGod to decide what's better and what to do.
The name, as Koitsu said, doesn't matter that much, it's really the content that does!!
I was actually trying to access the NES Cart DB right now, as part of my slow learning about mappers, NES/FC PCBs and cart making. After searching a bit I ended up back on this forum, reading the bad news.
It seems to be a very valuable resource for the NesDev community, and I'm happy to donate a bit if this recovery service works out.
As for the rest, and to paraphrase others, redundancy of such material is something to think about once everything is back online. Web scraping is good, but ideally, if possible, the source files and database dump could be made accessible via HTTP for others to pull, maybe using something like git. There are many other options (an rsync server would indeed be simpler than a DVCS repo and would optimise bandwidth usage).
Regarding hosting, OVH offers a barebones service called Kimsufi with dedicated servers starting at 5€/month. Pros: 500GB disk, 5TB traffic, excellent uptime. Cons: you're on your own, and the 5€ box is not very powerful (but FreeBSD runs wonderfully on it).
Good luck!
Careful use of Google's cache can get you at some of the data, just not the images, if you know exactly what to look for. But, it's very inconvenient. I eagerly await it coming back online.
Hi bootgod,
I have sent you a PM. I can offer you a free VPS hosted in the Netherlands. Let me know if you are interested.
Regards,
Mathijs
Just wanted to post an update: I didn't manage to ship out the drive before I left for a long-overdue vacation. I just got back tonight; I will get it shipped out for sure in the next couple of days. I've gotten floods of support both for hosting and for helping out with recovery costs. I really appreciate all of the community support, both here and from other sites.
Rest assured, once this gets straightened out, I will make sure others have a way to get backups of the database so if something like this was to happen again, multiple people would be in a position to recover it.
People have often requested an offline version of the site but I've never quite known how exactly to go about that. Of course it would be incredibly easy to make static versions of the profile pages, but how do you tie it all together? I feel like the most useful aspect of the site is the extensive search capabilities and results formatting and all of this stuff is done server-side.
Anyways, once I hear back from recovery service I will post an update!
Thank you everyone for your patience!
Excellent news today! I sent in the drive earlier this week, they received it yesterday, and already today they are telling me they can recover 100% of what I need and sent me a file manifest of the contents of the drive and it looks like everything is there! So long as the transfer goes smooth, I would expect to have the data back by mid next week!
Client-side or static page stuff is possible with javascript, it's just that a big database file would need to be sent to the client to filter it or search it. It might be possible to reduce the data transfer needed.
BootGod wrote:
Excellent news today! I sent in the drive earlier this week, they received it yesterday, and already today they are telling me they can recover 100% of what I need and sent me a file manifest of the contents of the drive and it looks like everything is there! So long as the transfer goes smooth, I would expect to have the data back by mid next week!
Hello BootGod.
Is there a way to donate some funds? PayPal preferably? I've been depending on your database for years now! It's one of the most well-made and thorough tech resources I know of. Again, hats off to you!
Hi, I'm a long-time lurker who just learned of this while searching for answers as to why the database was down. I actually registered in order to post this. I have used your site for some time and would also love to help donate some funds (do you have a PayPal-registered email? Would that work for you?). I'm very sorry to hear about these unfortunate events; please do keep up your amazing work!
I got all the data back; everything was recovered, thankfully. It's so close to being back online, but I'm running into one snag and it's really got me stumped. Something is going seriously wrong with PHP. Very shortly after a script starts to execute, it crashes, taking Apache down with it. In the Apache error log I get either "zend_mm_heap corrupted" or Apache just terminates with some random error code. It has nothing to do with the particular script; it will just crash at some random point unless the script is very short.
The most puzzling part is I'm using the exact same versions of everything that I was before, with the exact same configs, so I'm at a loss as to why this is happening. Any ideas, anyone?
The final damage was $390 for the recovery + new HD. My paypal address is prlacey_6942 AT hotmail DOT com if anyone wants to help out, I would definitely appreciate it!
Hi BootGod,
The error apparently is due to a bug in PHP [1]; reading more, [2] talks about setting "USE_ZEND_ALLOC=0" in the PHP config. Maybe you can try that, or try upgrading the *AMP install, obviously on a separate server to keep the original untouched.
Cheers.
[1]
http://drupal.stackexchange.com/questio ... mmatically
[2]
https://developers.oxwall.com/forum/topic/41865
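For reference, USE_ZEND_ALLOC is an environment variable rather than a php.ini directive, so for Apache/mod_php it has to be in Apache's environment before startup. A sketch (file paths are typical Debian/Ubuntu locations, not guaranteed):

```shell
# Debian/Ubuntu: add this line to /etc/apache2/envvars,
# then restart Apache so mod_php inherits it.
export USE_ZEND_ALLOC=0

# Or test a crashing script quickly from the CLI:
USE_ZEND_ALLOC=0 php /var/www/some-script.php
```

Note this disables PHP's internal (Zend) memory manager, which tends to mask heap-corruption bugs rather than fix them, so it's a diagnostic aid more than a cure.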
I second the suggestion to try newer PHP (especially PHP 7) on a different server.
BootGod wrote:
The final damage was $390 for the recovery + new HD. My paypal address is prlacey_6942 AT hotmail DOT com if anyone wants to help out, I would definitely appreciate it!
Done!
Long time lurker here. Great news Bootgod!!! Can't wait for the site to be back up.
I was also able to put together a mini version of NESCartDB based on one of your XML files. It's definitely not as comprehensive as your site, but I streamlined it for anyone looking for a quick donor list. It's also mobile-friendly.
http://mimeones.com/nescartdb
Hope you guys find it useful and suggestions are welcome!
dubcl wrote:
The error is apparently caused by a bug in PHP [1]; further reading here [2] suggests setting "USE_ZEND_ALLOC=0" in the PHP environment. Maybe you can try that, or try upgrading the *AMP install (obviously on a separate server, to keep the original untouched).
I did try that actually, but the server still crashes nonetheless, just without that error message :/
tepples wrote:
I second the suggestion to try newer PHP (especially PHP 7) on a different server.
Easier said than done, unfortunately. The code for the public version of the site is ancient and makes heavy use of the old magic quotes "feature" (I didn't know any better at the time). Currently the site is using 5.2.9-2; magic quotes is still available in 5.3.x, I believe, but was removed after that. I did try popping in a version of 5.3, and with that, Apache wouldn't even start up (and gave absolutely no indication as to why).
I had been working on a new version of the site that's much cleaner code-wise, uses a newer PHP, and has none of the magic quotes nonsense. The database structure for the new site is completely different from the old one, and the data is way out of sync. Not looking forward to the day when I have to re-sync them, either! In any case, the new version is not ready to be made live.
I'm sure I'll figure out something to get old one back up and running though.
Also, thanks everyone for the donations so far! I'll be sure to give credit on the site once it's back up and running.
mimeoNES wrote:
http://mimeones.com/nescartdb
Hope you guys find it useful and suggestions are welcome!
Permalinks please. Could you make result pages have URLs, either through fragment manipulation or through history manipulation? I wanted to link "list of carts you could use to make Wheel of Fortune" in this post, but the URL in the location bar still points to a blank form.
I can set up a mirror on my home server. Do you need it?
Can the NES Cart data be made available as a SQLite database?
Or, if that's too much trouble, at least as a MySQL dump so that one of us can do the conversion as needed.
zzo38 wrote:
Can the NES Cart data be made available as a SQLite database?
Please god no. The internal DB format tends to change and be non-backwards-compatible quite often. We deal with this mayhem in FreeBSD regularly (once a year usually), because the package tools rely on SQLite 3.x as a database.
Let me educate people: if you want to export a database into something that people can use themselves, what needs to be provided is a text-based format. I don't care if it's mysqldump with certain options disabled, XML with or without a DTD, or even a bloody CSV. UTF-8 output would be great, but even ASCII (with \xXX for chars outside the ASCII range) would be fine.
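For the MySQL case, a hypothetical mysqldump invocation along those lines; the database name, user, and output file are invented for the example:

```shell
# --skip-extended-insert writes one INSERT per row, keeping the dump
# diff-friendly; --hex-blob renders binary columns as plain ASCII hex;
# --default-character-set=utf8 pins the text encoding of the output.
mysqldump --skip-extended-insert --hex-blob --default-character-set=utf8 \
    -u backup_user -p nescartdb > nescartdb-dump.sql
```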
Also, to people putting up "mirroring sites" of this: if you think you're just going to run some kind of recursive wget --mirror, then I will remind you, as the guy who ran Parodius, that this is EXTREMELY RUDE AND UNRELIABLE. Don't do this. Instead, work with the site owner/admin (BootGod) and get something that doesn't scrape an entire site. There are TONS of ways to do this that don't destroy resources like a "website mirror" would.
I wish people would let BootGod get the thing back up and working first. He just spent US$390 out of his own pocket on data recovery. One step at a time, and please have some patience!
koitsu wrote:
zzo38 wrote:
Can the NES Cart data be made available as a SQLite database?
Please god no. The internal DB format tends to change and be non-backwards-compatible quite often. We deal with this mayhem in FreeBSD regularly (once a year usually), because the package tools rely on SQLite 3.x as a database.
Ironically, they say that using SQLite's file format as your own is a good idea (going as far as stating "Content stored in an SQLite database is more likely to be recoverable decades in the future"). Either something is wrong on your end, or something is wrong on theirs. (・_・)
EDIT: also before I forget, text based formats (at least when they have newlines) have the advantage of being easy to compare with a diff, and even to merge two files that were updated separately. Doing that with a binary format is a pain in the ass.
Sik wrote:
Ironically, they say that using SQLite's file format as your own is a good idea (going as far as stating "Content stored in an SQLite database is more likely to be recoverable decades in the future"). Either something is wrong on your end, or something is wrong on theirs. (・_・)
It's definitely not my end, not when I see commits coming across that mention such matters. The documentation you linked makes me chuckle, especially when you consider that to use said database, you need to use something that links into their API -- which is regularly changing in incompatible ways. (Yes, that's not the file format, but it's equally important.)
The two "biggest" projects I know of which use SQLite exclusively are pkg and ircd-ratbox, and I have already seen several reports of "weird SQLite database corruption" with pkg (these PRs have gone completely unanswered/ignored, which never sits well with me in an open-source project) -- but I will say that bugs in pkg are certainly possible. It's also possible that the problems I read about were related to the filesystem used as the backing store; SQLite, unlike other databases, relies exclusively on filesystem-level locks, which on some filesystems, including NFS, are known to be somewhat problematic (don't get me started on lockf vs. flock vs. fcntl).
My point here is this: don't just copy around a binary database file, even with SQLite. Either use .dump or a CSV export.
And yeah, text is good for lots of reasons, and you covered one of several. :-)
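To illustrate the .dump route, here's a minimal Python sketch using the standard library's sqlite3 module (which the thread mentions later). The carts table and its contents are invented for the example; the point is that iterdump() produces the same portable SQL text as the sqlite3 shell's .dump, instead of the binary file:

```python
import sqlite3

# Tiny throwaway in-memory database standing in for a real cart DB
# (the table name and row are made up for this sketch).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE carts (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO carts (name) VALUES ('Super Mario Bros.')")
con.commit()

# iterdump() yields the same SQL statements the sqlite3 shell's .dump prints:
# plain text that can be archived, diffed, and replayed on any SQLite version.
dump_sql = "\n".join(con.iterdump())
print(dump_sql)
```

Because the output is ordinary SQL text, it can be reloaded later with `sqlite3.connect(path).executescript(dump_sql)`, diffed against an older dump, or kept under version control.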
koitsu wrote:
especially when you consider that to use said database, you need to use something that links into their API -- which is regularly changing in incompatible ways.
This seems to be true of all major libraries these days, sadly. One of the reasons I hate semantic versioning is that people use it as an excuse to break old stuff that was otherwise working perfectly fine. When your library gets so cluttered that the only reasonable way to keep going forward is to break existing programs*, then maybe it just means the design is at its limit and you should consider starting a new library from scratch, with a new name, and leave the old one to be maintained by those who still rely on it.
I can see the idea behind semantic versioning but I'd like to get rid of its major version (effectively leaving just two numbers), which would leave no room for breaking backwards compatibility. People underestimate just how important it is to keep old things working.
*Ignoring those that were doing something that wasn't valid in the first place and hence were working just by pure luck =P
OK, then make it CSV (although using tabs to separate would probably be better than commas, I think).
zzo38 wrote:
(although using tabs to separate would probably be better than using commas, I think).
Until somebody does something dumb that results in tabs being replaced by spaces =P Although it's feasible I suppose.
My day job involves "plumbing" (data interchange engineering) for a local R/C car shop's online presence. Amazon prefers tab-separated files, while eBay prefers comma-separated. I've noticed three practical differences between the two:
- In order to store a value containing the delimiter, you have to quote the value. This involves replacing all instances of " with "" and then surrounding the value with " on each side. Tab-separated files need far fewer fields to be quoted than comma-separated because a comma appears in things like product titles and manufacturer names more often than a tab. Mainland Europe often uses semicolon-separated files so as not to need to quote a decimal comma.
- File associations in GUI file managers differ. If the user has a spreadsheet app installed, double-clicking a comma-separated file is more likely to open in the spreadsheet app, while tab-separated files are more likely to open in a text editor. One often has to manually associate .txt with LibreOffice Calc or Microsoft Excel in the "Open With" menu, while .csv gets associated by the spreadsheet app's installer.
- But spreadsheet apps use tab-separated data on the clipboard, meaning data in a tab-separated file can be copied and pasted between a text editor and a spreadsheet. This allows, for example, using RDP's clipboard support as crude file transfer.
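The first point above, that tab-separated files need far fewer quoted fields, can be demonstrated with Python's csv module; the product row below is invented for the example:

```python
import csv
import io

# A row whose title contains a comma but no tab (hypothetical product).
row = ["RC-1234", "Slash 4x4, 1/10 scale", "259.99"]

def dump(delimiter):
    # Serialize one row with the given delimiter using default minimal quoting.
    buf = io.StringIO()
    csv.writer(buf, delimiter=delimiter).writerow(row)
    return buf.getvalue().strip()

print(dump(","))   # the title must be quoted: it contains the delimiter
print(dump("\t"))  # same row needs no quoting at all
```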
I've made some of my tools work with both by reading column headings from the first line of the file, counting commas, semicolons, and tabs in that line, and assuming the delimiter is what there are the most of. Then it reads the rest of the file using that delimiter.
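A sketch of that count-the-delimiters heuristic in Python (the standard library's csv.Sniffer does something more elaborate, but the simple version is only a few lines; the header strings in the test are made up):

```python
import csv

def detect_delimiter(header_line):
    # Whichever candidate delimiter appears most often in the header row wins.
    return max([",", ";", "\t"], key=header_line.count)

def read_table(path):
    # Sniff the delimiter from the first line, then parse the whole file.
    with open(path, newline="", encoding="utf-8") as f:
        delimiter = detect_delimiter(f.readline())
        f.seek(0)
        return list(csv.reader(f, delimiter=delimiter))
```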
As for SQLite, even when it makes incompatible changes to its C API, Python's import sqlite3 somehow seems to paper over these changes.
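For instance, the version of the underlying C library is visible from Python, while the DB-API surface that scripts actually code against stays put regardless of which SQLite build is linked in:

```python
import sqlite3

# sqlite3.sqlite_version reports the SQLite C library this Python build links
# against; it varies between installs, but the DB-API calls below do not.
print("SQLite library version:", sqlite3.sqlite_version)

con = sqlite3.connect(":memory:")
result = con.execute("SELECT 1 + 1").fetchone()[0]
print(result)
```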
tepples, yes, I can have the results open on a separate page via query string or something. That way you can pull up the results without having to redo your search. Thanks for the feedback; I'll keep you posted on when this update is made.
Being able to search by board name, mapper chip name, and mapper number are the use-cases I tend to have most, not game name. (They're also mostly the ones the wiki uses.)
@tepples, I have changed the results to show on a separate page so you should be able to bookmark and direct link now. After selecting the game from the search page, it will redirect to the new page. For the Wheel of Fortune example:
http://mimeones.com/nescartdbdonorlist? ... bile=false
@Myask, I will add more search options where you can search by any of the fields listed as well.
Thank you for the feedback!
Is there an issue tracker for http://mimeones.com/nescartdb? Should I report issues here, in a new topic, or somewhere else?
tepples wrote:
Is there an issue tracker for http://mimeones.com/nescartdb? Should I report issues here, in a new topic, or somewhere else?
Hi @tepples, you can send me a PM or email me at mimeones@gmail.com and I'll be more than happy to take a look. I will also open up a new thread for feature requests, enhancements, and bugs.
@Myask, I have added some additional search filters under Advanced Search. The only other field that's left from the database is iNES mapper number. If that is helpful, I can add that too.
Thanks!
Any updates? There's a NesCartDB-shaped hole in my heart.
Same here; that hole has to be filled again. I hope it's going to be up and running again one day.
mimeoNES's page is a great alternative and helped me with my first attempts at making a few games. Thanks for doing that, mimeoNES!
Looking forward to the return of the old site
Thanks for telling us about mimeoNES - that's very helpful.
Is there any way that we could get the original NesCartDB database in downloadable form? I didn't realize how heavily I rely on it when I'm on the hunt for donor carts for various projects. It's really useful to be able to see inside the carts: how much extra room there is, and whether W-RAM is present. I know of no other way to tell if W-RAM is present, as the iNES header does not accurately provide this information.
Ben Boldt wrote:
I have no idea any other way to tell if W-RAM is present
For US releases, you can look up the board names here, and then check the specific page of each board.
Another way to check for the presence of WRAM is to open the game in FCEUX and set up a breakpoint for accesses to $6000-$7FFF, which is the range where WRAM is mapped when present. If the game constantly writes to and reads from that space, its cartridge certainly has WRAM.
I haven't checked too thoroughly, but archive.org seems to have most of BootGod's DB backed up, save for the most recent uploads. Search doesn't work, though.
Still want the original back, I have stuff to upload!
Thanks for the info tokumaru and Skrybe!!
Site is back up finally! Sorry it took so long! :/ If you run into problems, please let me know.
Also, if you donated, please PM me with whatever name you'd like to be credited with.
BootGod wrote:
Site is back up finally! Sorry it took so long! :/ If you run into problems, please let me know.
I am glad to hear you have it back up. The offer to host the database for you still stands, or even if you wanted us to host a mirror.
O FRABJOUS DAY!
This calls for celebration, so good to have NesCartDB back! Thanks a ton bootgod!
Thank you.
Going forward, I'd like to make some sort of periodic backup of the XML and images to help keep this from happening again. Will there be a way to go about this without angering anyone or getting blocked?
rainwarrior wrote:
O FRABJOUS DAY!
Calloo! Callay!
(suggest: append to title ", no longer")
I get an error on searches, like this one:
http://bootgod.dyndns.org:7777/search.php?unif=HVC-RROM
Quote:
Error: Query Failed!
Reason: Access denied for user 'stats'@'localhost' to database 'website'
Search terminated due to malformed query!
Then sometimes when I reloaded it, it worked fine.
edit:
Also sometimes getting errors on page views:
http://bootgod.dyndns.org:7777/profile.php?id=3611
Quote:
Error: Query Failed!
Reason: SELECT command denied to user 'stats'@'localhost' for table 'tbl_carts'
Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in C:\Program Files\Apache Software Foundation\Apache2.2\htdocs\profile.php on line 130
Could not find CartID 3611 or it is disabled.
edit again:
Seems to work okay for now, but if it breaks again I'll let you know.
Just want to say it's great to have this working again!
Thank you so much for bringing back this amazing resource!!
I'm very happy about this! Can we expect the software section to be updated at any time? Are the regions also updated? Is there an additional method for users to submit dumping plugins?
I'm curious to know the status of the as-yet-undumped list of Famicom carts. There doesn't seem to be much progress on getting those bought and dumped. Are they too expensive or something?
I assume it's still volunteer ad-hoc community work, and we haven't got a member in Japan with money to burn, motivation, and equipment.
Down again, with "access denied" all over the place.
tepples wrote:
Down again, with "access denied" all over the place.
WFM
Seems fine on my end as well. Maybe it was just being a little sporadic earlier?
tepples wrote:
Down again, with "access denied" all over the place.
It was reported to me, through a channel that I don't remember with certainty but was most probably #nesdev on EFnet, that NesCartDB was broken. I checked it out, and the site was having issues with its MySQL database: the application server was getting "access denied" from the database server when trying to SELECT from various tables. It appeared as "access denied" because the application server was set up to echo PHP warning messages to output instead of logging them privately or converting them to exceptions. This problem was present at the time I posted the previous comment but has since cleared up.
FitzRoy wrote:
I'm curious to know the status of the as yet undumped list of Famicom carts. There doesn't seem to be much progress on getting those bought and dumped? Are they too expensive or something?
It's mostly a money thing for me; I don't have quite as much disposable income as I used to. Also, when I first started dumping FC stuff, I was buying large bulk lots very cheaply. At this point I pretty much have to get them individually, which adds up quickly.
tepples wrote:
Down again, with "access denied" all over the place.
I'm aware this happens occasionally. I think it has something to do with concurrent connections to the DB being maxed out, because for some reason some sessions aren't closing properly. The problem fixes itself when the rogue sessions time out. Could anyone who has experienced this error tell me how long the problem persisted?
BootGod wrote:
tepples wrote:
Down again, with "access denied" all over the place.
I'm aware this happens occasionally. I think it has something to do with concurrent connections to the DB being maxed out, because for some reason some sessions aren't closing properly. The problem fixes itself when the rogue sessions time out. Could anyone who has experienced this error tell me how long the problem persisted?
Here are a couple things you can try to make these problems occur less often.
Nowadays it's best practice to convert all PHP warnings to ErrorException:
Code:
function exception_error_handler($severity, $message, $file, $line) {
    // Respect the configured error_reporting level: skip severities that are
    // suppressed (e.g. silenced with the @ operator).
    if (error_reporting() & $severity) {
        throw new ErrorException($message, 0, $severity, $file, $line);
    }
}
set_error_handler("exception_error_handler");
This way, failed connections will cause errors that you can catch. Failing fast reduces how much load a failed connection puts on the database before it times out.
Another is testing the database connection before trying to render anything. Often you can test the connection by trying a simple query that should always work, such as looking up global settings for the application or data associated with a user session. If this fails, raise an appropriate HTTP status code such as 503 Service Unavailable. More generally, getting in the habit of doing all the database access before producing any output will help you separate functions that access data from functions that render it for display, which can make the architecture cleaner and easier to maintain.
Discuss the latest downtime in #16325