I decided to split this topic...
From viewtopic.php?f=22&t=12784&start=150#p178439:
rainwarrior wrote:
You mentioned back ups, but it sounded like you mean you were doing this manually? Have you ever tried source control systems like subversion or git? I find them invaluable for any kind of software project; they seem almost as essential to me as a text editor at this point. (You get back ups, tools for examining differences, and complete history of changes for every file, all rolled into one service.) There are lots of free private subversion/git hosts out there that you could try this out with.
tepples wrote:
In order for it to be "backups", your repository has to be stored somewhere other than your own computer. When developing RHDE, I know I backed up the source tree daily to a flash drive; I forget if I'd already been using Dropbox by then. A daily backup regime can be thought of as a very primitive, coarse-grained version control system, as you can always unzip a previous tree and diff them.
As for real version control, such as Subversion or Git, that's an excellent choice for free software or other projects whose source code is public. Perhaps I haven't looked enough, but I know GitHub charges a monthly fee for hosting private repositories. You could instead lease a virtual private server (another monthly fee) and set up a Git server.
rainwarrior wrote:
There are lots of free private repository hosts, there's no need to set up your own. I've been using assembla for years, but there are many others.
I back up to an external hard drive, and to Dropbox. I don't back up that often. On another project I've been working on for 6+ months, I think I've backed up about 20 times. That's about once a week.
And by "backup", I mean I zipped the source files and saved the .zip.
For small things like NES games, I upload an archive at the end of each day I work on them to Google Drive. I often keep the code files separated from larger things, such as graphics, design documents, tools, and so on, because those aren't updated as often, so they don't need to be backed up as frequently, which helps save some space. Before there was Google Drive, I just emailed the backup files to myself.
I use git and subversion for work stuff, because working with a team demands that, but for my own personal projects I can't be bothered to set those services up, and you just can't beat the simplicity of daily archives.
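The daily-archive approach described above can be as small as one tar line. A hedged sketch, using a throwaway demo tree as a stand-in for a real project directory:

```shell
# Sketch of a dated daily archive. All paths here are temporary stand-ins;
# point $project at your real source tree and $backups at your backup spot.
project=$(mktemp -d)/mygame          # stand-in for your source tree
mkdir -p "$project" && echo 'demo' > "$project/main.s"
backups=$(mktemp -d)                 # stand-in for your backup location

# One compressed tarball per day; earlier days' archives stay untouched.
archive="$backups/mygame-$(date +%Y-%m-%d).tar.gz"
tar -czf "$archive" -C "$(dirname "$project")" "$(basename "$project")"
```

Uploading the resulting file to Google Drive (or emailing it to yourself) then completes the routine described above.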
I use a NAS (a Synology DS213j, specifically) with a sync setup to keep my stuff backed up immediately. Just my projects and the stuff that matters. It even has 2 disks in it, so if one of them fails I'm still good (hopefully). What's really convenient is that I have it set up on my laptop as well, so I can write some code on my laptop, and the next time I connect to my network it gets backed up and my changes land on my main PC.
I also occasionally burn a backup DVD.
A while back (maybe 5 or 6 years ago), I actually got hit by ransomware on my PC. Thankfully, it was weak stuff: all it did was mark my files with the hidden and system attributes. Uh, yeah, that trick is not gonna work on me.
That was the only time I'd ever had a virus, or at least one with a noticeable effect. Well, maybe not: there was a time when I didn't know any better and used Outlook Express, but that was just an email worm (it self-propagates and does little else). But yeah, ransomware is pretty scary stuff; it can encrypt your files, and then you're screwed if you don't have a backup.
For my NES projects I also use TortoiseSVN with a local repository (actually, I recently moved it onto the NAS). It's easy to use and the benefits are just awesome: you can easily compare what you changed, and go as nuts as you want when changing your sources without having to bother with manual backups.
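A local repository like that takes only a couple of commands to create with plain Subversion. A sketch, with made-up temp paths and filenames, that skips gracefully if Subversion isn't installed:

```shell
# Sketch: a purely local Subversion repository, no server needed.
# TortoiseSVN's "Create repository here" does the svnadmin step for you.
command -v svnadmin >/dev/null || { echo "subversion not installed"; exit 0; }

repo=$(mktemp -d)/svnrepo            # stand-in for a NAS or local path
svnadmin create "$repo"

wc=$(mktemp -d)/wc                   # working copy
svn checkout "file://$repo" "$wc"

echo 'lda #$00' > "$wc/main.s"       # hypothetical source file
svn add "$wc/main.s"
svn commit -m "first commit" "$wc"
```

The `file://` URL is the whole trick: the "server" is just a directory, which can live on a NAS mount exactly as described above.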
The thread in question, despite being filed under NESdev, seems to be about how to back up your NES projects. I didn't read the thread in full, so don't chastise me too much. Lots of free options are available to you, in no particular order:
1. Use some kind of version control system, e.g. git, and push the code to GitHub (or anywhere else that offers private repos). The downside is that it requires getting familiar with git. Subversion, Perforce, Mercurial, etc. are other options. Also, GitHub and many other repo providers often dislike it if you store tons and tons of binary data/files in a repo (gigabytes would really tick them off; I know this from experience at my previous job).
2. Use services like Dropbox, Google Drive, or Microsoft OneDrive, and copy the files over periodically whenever you want. Another option is a provider like rsync.net, where for a little money you can stick data of your choice on their storage using rsync or other protocols. (I'm a big fan of rsync, the utility.)
3. Do classic backups: back up your drives, directories, whatever you want, to, say, a USB flash drive, USB hard disk, spare hard disk in your PC, another PC on your LAN, or whatever.
I use a combination of #1 and #3. I'm very picky about when I use #1, because I don't like sticking my own private data on other people's stuff. I mainly use #1 for public open-source projects that I maintain/own. I use #3 (a couple different types of #3 simultaneously, actually) because I prefer owning the equipment -- I bought it, it's mine. If it goes bad, it's my problem. I simply don't trust "cloud providers" for this sort of thing; my data = my responsibility.
One final data point: I cannot stress the importance of this enough: TEST YOUR BACKUPS (I.E. DO A RESTORATION). I cannot tell you how many times, both personally and professionally, I have seen people deploy "backup solutions" that completely and utterly fail them when it comes time to restore -- but by that point they're already up shit creek and are now fighting two fires (loss of data combined with fighting with their backup/restore system).
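The restoration test urged above can be automated in a few lines: make the backup, extract it somewhere else, and compare against the original. A sketch, using temporary directories as stand-ins for a real source tree and backup archive:

```shell
# Stand-in "real data" and backup archive; substitute your own paths.
src=$(mktemp -d)
echo 'important' > "$src/notes.txt"
backup=$(mktemp -d)/backup.tar.gz
tar -czf "$backup" -C "$src" .

# The restoration test: extract into a scratch directory and diff it
# against the original. If diff complains, the backup cannot be trusted.
restore=$(mktemp -d)
tar -xzf "$backup" -C "$restore"
diff -r "$src" "$restore" && echo "restore verified"
```

Running something like this after every backup (not just once, when the system is first set up) is what catches the silent failures described above.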
I personally just use "classic" backup, copying my data to various media stored all over the house. In case of fire I could very likely be rescued holding the backup media in my hand, and I don't live in a seismic area, so this is safe enough, I guess. Not 100% safe, but something like 99.9%.
I cannot stress enough the last part of koitsu's post. A non-working backup is not only useless but actually dangerous, because you think there's a backup you can trust when in fact there isn't.
I host my own version control (although this is different from making a backup); I should probably also make backups on DVD and/or other media, and I'll need to figure out the best way to set that up. Verifying backups does help, and a backup program can be made to verify automatically. (In DOS and Windows I think the VERIFY command does this; I am not sure if there is something similar on Linux, but even if there isn't, there are other ways to verify automatically.) Verification of course won't help if you are backing up the wrong data, although you can check that manually. I also have a hard drive from my old computer with old backups of everything I had on it. For public data, external backups also help: anyone who is interested in my programs can make their own mirrors of them (someone already mirrors my TAVERN project).
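On Linux, one common way to approximate DOS's VERIFY is a checksum manifest: record a hash of every file at backup time, then re-check the copy against the manifest. A sketch using GNU coreutils' `sha256sum`, with temporary paths as stand-ins:

```shell
# Stand-in data; substitute your real backup source and destination.
data=$(mktemp -d)
echo 'rom data' > "$data/game.nes"

# At backup time, record a checksum for every file:
manifest=$(mktemp)
( cd "$data" && find . -type f -exec sha256sum {} + ) > "$manifest"

# After copying, re-check the destination against the manifest.
# sha256sum -c exits nonzero if any file is missing or corrupted.
copy=$(mktemp -d)
cp -r "$data/." "$copy/"
( cd "$copy" && sha256sum -c "$manifest" )
```

As noted above, this only proves the copy matches the source; it can't tell you that you backed up the wrong data in the first place.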
koitsu wrote:
One final data point: I cannot stress the importance of this enough: TEST YOUR BACKUPS (I.E. DO A RESTORATION). I cannot tell you how many times, both personally and professionally, I have seen people deploy "backup solutions" that completely and utterly fail them when it comes time to restore -- but by that point they're already up shit creek and are now fighting two fires (loss of data combined with fighting with their backup/restore system).
This is so important. Triple-check your backup system and the backups themselves.
I'll tell a little horror story:
On my primary computer I have a 1TB HDD which mounts to /home. While I boot off of an SSD and store the OS and programs there, my user data is on the large mechanical drive.
I thought it would be prudent, in addition to remote version control, to make periodic "classic" backups to another device. So I got another 1TB HDD as a backup drive, installed it, mounted it to /backup, and took ownership of that directory. Great! After formatting the new drive, I rsync'd my home folder to /backup, and it looked great. Finally, I edited /etc/fstab to make the drive mount on boot.
A month later, I went to do another backup. I used rsync once more, having it copy only files that were new or had changed. Well, after a few minutes things got a little slow, and the HDD indicator light was going nuts. I decided to look more carefully at the output scrolling by in the terminal (Ctrl+S to pause it, Ctrl+Q to resume). I hadn't paid much attention to it, since I ran rsync with --verbose, but now that I looked carefully I saw the lines were all errors. Nothing was being copied!
I looked at fstab once more and realized I hadn't set up my backup drive to mount to /backup at all, but had instead double-mounted my /home drive to /backup. I was writing my home folder back to itself.
In the end, nothing was lost, but if I'd been relying on this system without paying attention I'd have no meaningful backups.
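A cheap guard against exactly this failure mode is to refuse to copy when the source and destination turn out to live on the same filesystem, which is what the accidental double-mount produced. A sketch using GNU `stat` (the function name `safe_backup` is made up for this example):

```shell
# Refuse to rsync when src and dst are on the same filesystem.
# stat -c %d prints the device number a path lives on (GNU coreutils;
# on BSD/macOS the equivalent is stat -f %d).
safe_backup() {
    src=$1
    dst=$2
    if [ "$(stat -c %d "$src")" = "$(stat -c %d "$dst")" ]; then
        echo "refusing: $dst is on the same filesystem as $src" >&2
        return 1
    fi
    rsync -a --delete "$src/" "$dst/"
}
```

With this in the backup script, a mis-mounted /backup makes the run fail loudly instead of silently writing /home back onto itself.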
tokumaru wrote:
I use git and subversion for work stuff, because working with a team demands that, but for my own personal projects I can't be bothered to set those services up, and you just can't beat the simplicity of daily archives.
Oh man, I find them super-useful for my hobby stuff too. Being able to quickly bisect my history (using git bisect, or doing it manually) to figure out when and how a bug was introduced? Being able to keep a couple of branches to test out features, with a good interface for managing them? I'd be so annoyed to go back to developing without proper source control.
One thing that I like to do with version control is set up the ability to recreate any ROM on a fresh box (habits from work die hard).
I'm not sure how much art you are generating, but if you're using version control for a single project, I am pretty sure that GitHub will not complain no matter how much data you use.
For everything else there are symlinks and Dropbox, I suppose.
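The `git bisect` workflow mentioned above can be demonstrated end to end on a throwaway repository: build a few commits where one breaks a check, then let bisect find it automatically. In a real project the pass/fail command would be your build or ROM test; here it is just a `grep`:

```shell
# Throwaway demo of git bisect; everything lives in a temp directory.
command -v git >/dev/null || { echo "git not installed"; exit 0; }
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you

echo pass > status.txt; git add .; git commit -qm "good 1"
echo pass > other.txt;  git add .; git commit -qm "good 2"
echo fail > status.txt; git commit -qam "introduces the bug"
echo more > other.txt;  git commit -qam "later work"

# Mark the bad (HEAD) and good (HEAD~3) endpoints, then let bisect run
# the test command at each step: exit 0 = good commit, nonzero = bad.
git bisect start HEAD HEAD~3
out=$(git bisect run grep -q pass status.txt)
git bisect reset
echo "$out" | tail -n 5
```

The output names "introduces the bug" as the first bad commit, which is exactly the "when and how" question described above.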
tokumaru wrote:
I can't be bothered to set those services up
Setting it up is just:
1. Filling out a registration form on a website that offers free private SVN.
2. Installing TortoiseSVN.
3. SVN Checkout your repository.
It's pretty similar to the amount of work that it takes to set up Dropbox, really. Also, you only have to do it once, you don't need a separate repository for every single project. Not for a private personal-use one, anyway.
It also solves the issue of keeping your backups in a different location, though it does not solve the "I don't trust the cloud with my stuff" issue, if that's a concern. If you do have a server of your own, installing an SVN repository host on it is slightly more work, but still not terribly onerous.
As for regular backups, a couple of people have mentioned rsync, and I'd like to cast my vote for it as well. I tried a lot of different backup programs and was frustrated by all of them; I was about to give up and write my own, but then I found out about rsync, and it did exactly what I wanted (incrementally copy files from one location to another, prune deleted files, support an "ignore" list, do things in the correct order, etc.).
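That exact feature list maps onto three rsync flags. A sketch with temp directories standing in for a real source tree and backup drive:

```shell
# Incremental copy + prune + ignore list with rsync; paths are stand-ins.
command -v rsync >/dev/null || { echo "rsync not installed"; exit 0; }
src=$(mktemp -d)
dst=$(mktemp -d)
echo 'code' > "$src/main.s"
echo 'junk' > "$src/scratch.tmp"

ignore=$(mktemp)
printf '%s\n' '*.tmp' > "$ignore"    # the "ignore" list

# -a              archive mode: recursive, preserves times/permissions
#                 (which is also what makes later runs incremental)
# --delete        prune destination files that no longer exist in src
# --exclude-from  skip anything matching a pattern in the ignore file
rsync -a --delete --exclude-from="$ignore" "$src/" "$dst/"
```

Note the trailing slash on `"$src/"`: it copies the directory's contents rather than nesting the directory itself inside the destination.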
I use rsync too, to a regular external HDD. I originally set it up to sync data between two computers, back when I still had a desktop, which was a great way to make sure it was copying everything. I had my first total hard disk failure about a year ago and only lost some nonessential data I had intentionally not backed up.
I haven't solved the issue of the backup living three feet away from my computer though. It would take more than two years to copy everything over my internet connection (dead serious: 7 GB/mo and 200 GB of data), so that's not really an option, except for the really important stuff.
rainwarrior wrote:
Setting it up is just:
Well, for me it's not really just the work of setting it up, it's also the fact that I don't work exclusively on my own computers... I sometimes have the chance to work on my projects using other people's computers, and with my current setup I can easily download ZIP files with everything I need and start working right away, without having to install or configure anything on anyone's computer.
I also format my own computers a lot, and I hate having to set things up every time I do it. This is why I almost exclusively use portable applications, which are permanently available in a separate partition of my hard drive, so I don't need to waste an entire day to make my computer usable again.
Hmm, well I mentioned TortoiseSVN because I like it, but there do exist portable versions of subversion and git that you could use instead.
I feel SVN in any form is dated and should not be used for new projects. I mostly use GitHub for Windows with a little bit of TortoiseGit.
I use github.com for projects that I don't mind being public, and bitbucket.org for projects that I don't want to be public yet.
I should add that git is very useful above and beyond its use as a backup system, and a version control system like git is a must-have if you ever collaborate with someone.
furrykef wrote:
I feel SVN in any form is dated and should not be used for new projects.
SVN is actually a bit older than Git, actively maintained, and widely used.
I mean, I'm aware of the ongoing holy wars about SVN vs Git (which I largely don't care about; I use both), but I'm kind of thrown by it being called "dated"? Do you mean CVS, which was discontinued years ago?
SVN is most definitely not dated. The FreeBSD project actively uses it as their sole/primary VCS (there was a debate over which VCS to use, including git, and SVN won), and has since roughly 2009.
It may be that you're thinking of RCS or CVS; however, both are still used in many places, as they work fairly well for purely local, simple projects. (I myself used CVS up until about 2013.)
No, I'm not confusing it with RCS or CVS. I've used SVN myself; in fact, I was responsible for getting a game company I worked for to use it. (Prior to then, they were using no version control at all!)
But git seamlessly handles things that are a pain in the butt in SVN. I don't know, maybe SVN has caught up since then; I last used SVN maybe six years ago. I also think distributed version control is inherently superior to centralized version control for small projects.
But what really makes git great is GitHub. For example, I can patch somebody's software just by checking out their repo, making my changes, and submitting a pull request. No need to make any patch files, submit any bugs on the bug tracker, or anything like that.
Another alternative to Git or Subversion is Fossil, which is what I use.
After having Derek twist my arm at the beginning of the year trying to get me to do something like this, and now reading how much people say it helps, I went and joined Bitbucket. I figured out how to do my first commit and push, so I guess we'll see how it goes!
furrykef wrote:
I don't know, maybe SVN has caught up since then; I last used SVN maybe six years ago.
SVN is actively maintained, and it's been steadily (if subtly) improved over the years. The interface is mostly the same as it has always been, but a lot of things about it work better than they did several years ago.
It will never be a distributed system, but I consider that a bit of a wash. There are things I like about distributed version control and things I don't. There's a lot I like and hate about both git and SVN, really, but they're both pretty practical for a lot of uses.
I've used both svn and git. I prefer git, and I think it's better for large projects with many developers, but for a single developer's work I don't think there's much reason to prefer one over the other.
It's so refreshing to see reasonable statements about git vs SVN here. I'm so tired of the "SVN sucks because it's not distributed" line. There's things git does better, but the dogma and/or groupthink about it drives me crazy.
I think half the kids complaining about SVN have never actually used it. Or they think GitHub == git.
Hmm. I thought git = Github.
Thanks for enlightening us/me.
git is a VCS (version control system).
GitHub is a service provider for git repositories and a web interface for managing those repos (incl. access control), as well as things like making pull requests easier; it even provides support for SVN/Subversion. There are *tons* of providers that do the same thing GitHub does; which one suits your needs/tastes/etc. varies in the eye of the beholder.
You can use git without a service provider (i.e. no HTTP, SSH, etc. -- just pure files/directories): see "Local":
https://git-scm.com/book/ch4-1.html
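The local protocol mentioned there is just a bare repository on a plain path; a NAS mount or USB drive can play the role of the "server". A sketch with temp directories standing in for those locations:

```shell
# git's local protocol: the "remote" is a bare repo on a plain path.
command -v git >/dev/null || { echo "git not installed"; exit 0; }
remote=$(mktemp -d)/backup.git       # stand-in for e.g. /mnt/nas/backup.git
git init -q --bare "$remote"

work=$(mktemp -d)/project            # your working checkout
git clone -q "$remote" "$work" 2>/dev/null
cd "$work"
git config user.email you@example.com
git config user.name you

echo 'hello' > README
git add README
git commit -qm "initial commit"
git push -q origin HEAD              # history now also lives in the bare repo
```

Every push duplicates the full history onto the other disk, which is what makes this usable as a lightweight backup as well as version control.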