The subject says "framebuffer", which to me means "a completely linear piece of memory". That rules out bitplane-based models.
Like samophlange, I thought of the standard VGA model on the PC, where you have these choices:
* Standard VGA mode 0x13, which is 320x200 and 256 colours
* Mode X, which is 320x240 and 256 colours -- offering visually square pixels
* Mode Q, which is mode X but "tweaked", thus 256x256 and 256 colours -- many DOS NES emulators used this (I speak as the author of one!)
All these result in a linear framebuffer of sorts: a single byte of memory represents one pixel (a palette index of 0-255) to visually show on screen. The one caveat is that Mode X and Mode Q are "unchained", so the four VGA memory planes have to be selected through the sequencer's Map Mask register rather than being fully flat like mode 0x13. In real mode, the data is at segment 0xA000 or A000h; in protected mode it's at physical 0xA0000-0xAFFFF or A0000h-AFFFFh.
All of this still works on PC architecture today, but of course present-day OSes inhibit you from doing anything with that memory directly (the last OS to let you play with it was, IIRC, Windows XP, for specific DOS applications running under NTVDM).
I suspect the below answers are not what you're asking for/about, but I'll mention them anyway:
If you want something that works with a present-day OS, I'm pretty sure the way this is accomplished (in Direct3D) is by setting up two triangles that form a quad, then mapping a texture across both of them -- your "linear framebuffer" is then that texture's memory, which IIRC you can access directly. There's also using a locked RECT surface in D3D9, and I believe there's a way to achieve this in Vulkan as well.
As for audio: good frickin' luck! I don't know of many audio systems that worked that way to begin with. If you mean something akin to the old PC/DOS sound cards -- a linear piece of memory your code updates while telling the soundcard to play it, keeping track of where the card currently is in the buffer, with the hardware simply playing (and looping) whatever's there -- then AFAIK that's still the same model used today by DirectSound and other audio APIs.
In short, the "old way" of doing things -- the way that's simple, makes a ton of sense, and gives the programmer the most control -- is going the way of the buffalo, replaced by layers and layers of abstraction and complication (no guarantee your code runs at a specific time/interval/whatever), and so on. A lot of this has to do with the fact that people's computing needs changed during the 90s and 2000s, alongside the introduction of multiprocessing. In exchange for the latter, we had to give up a lot.
Edit:
Oziphantom mentioned the Apple IIGS, so I'll talk about that. There's no video hardware acceleration of any kind on the IIGS, except maybe fill mode (you decide):
The IIGS's graphics are a linear framebuffer... kind of. The memory map looks like this:
* Bank $E1, $2000-9CFF: pixel data
* Bank $E1, $9D00-9DFF: scanline control data
* Bank $E1, $9E00-9FFF: palette data
Pixel data represents the literal pixels of the two main video modes (320x200 and 640x200). In 320x200 each pixel is represented by 4 bits (16 colours); in 640x200 each pixel is 2 bits (4 colours, drawn from different quarters of the 16-colour palette depending on pixel position -- I don't want to talk about 640x200, it's a mess). There is no way to increase the vertical resolution. So on the IIGS, there's always a limit of 16 colours per horizontal scanline, but there's a nuance:
The scanline control data is a 200-byte area that controls what palette each individual scanline uses. Each byte represents a scanline (e.g. $E19D00 = scanline 0, $E19D01 = scanline 1, up to scanline 199). The remaining 56 bytes are unused. The format of the data is:
* Bit 7: Graphics mode addressing (0=320x200 mode, 1=640x200 mode)
* Bit 6: Generate IRQ when scanline is drawn (0=off, 1=on)
* Bit 5: Fill mode (0=off, 1=on; only available in 320x200 mode)
* Bit 4: Reserved, must be 0
* Bits 3-0: Palette number (0-15)
I forget exactly how bit 7 works but again I don't want to talk about 640x200 anyway.
Bit 5 / fill mode is kind of weird but useful in some situations: if the bit is set, any pixel with a value of $0 re-uses the colour of the preceding pixel on that scanline. So if pixel 0 has a value of 4, followed by 9 pixels of value 0, those 9 pixels also show colour 4. Also, with fill mode, pixel 0 of a scanline can't be 0, otherwise you get weird behaviour. Many demos used this to achieve certain effects, like giving the impression of full-screen horizontal text scrolling with nothing else really visible (there's a demo that did this whose name I forget, else I'd find it for you).
The palette data is a 512-byte area that defines what RGB values each pixel of the pixel data displays. There are 16 palettes, with 16 colours per palette; e.g. palette 0 = $9E00-9E1F, palette 1 = $9E20-9E3F, etc. The format of each palette entry is as follows, with R/G/B given 4 bits each:
* Bits 15-12: Reserved, must be 0
* Bits 11-8: Red
* Bits 7-4: Green
* Bits 3-0: Blue
So, you tell me -- does that count as a linear framebuffer or not? :P
The IIGS's graphics are kind of weird because of its memory map, its backwards-compatibility with the Apple II, its bank shadowing (mirroring) capability, and the fact that certain memory accesses run at 1MHz rather than 2.8MHz. I started doing a write-up of it all from reference material, but about 30 minutes in I Googled and found that Eric Shepherd and Andy McFadden already explain it, along with just how unexpectedly powerful the IIGS's shadowing registers ($C035 and $C036) are. Eric covers it but also uses GS/OS toolbox functions (e.g. NewHandle), which confuses things a bit -- don't worry about that -- while Andy goes right for the guts. Both of them explain tricks used to copy data around fast, like relocating the direct page and the stack and using pei to copy data quickly (when mirroring bank $00 to $01, which effectively lets you map DP and the stack onto graphics memory). The nop-intermixed-with-pei thing at the end of Eric's reply is primarily an IIGS-specific thing (and is even why people often say the IIGS runs at 2.6MHz rather than 2.8MHz). (Maybe me posting all of this gives some insight into why I talk about things like relocating DP all the time...)
All this brings me back to old times, and even reminds me why I loved PC graphics over Apple IIGS graphics so much: 16 colours per scanline really sucked for decent images. While the IIGS's palette-per-scanline thing is neat (and you can actually do a lot of really cool effects with it), you still couldn't display a GIF without applying dithering and so on to try and "make up" for it. The "peak" of displaying still images on the IIGS was what was called "3200 mode", which programs like DreamGrafx and Convert3200 achieved by extending the palette count from 16 to 200 (one palette per scanline), purely through trickery: tweaking the palettes in real time during HBlank (200 scanlines * 16 colours = 3200 colours). Displaying pictures this way leaves you literally *no* other CPU time for anything -- I'm not exaggerating in the least -- and you're *still* limited to 16 colours per scanline. I often thought the 3200 mode thing was a farce; it felt incredibly "Apple" in its modus operandi (kind of like false advertising), and was certainly invented out of spite towards the PC and Amiga. All you had to do was give a IIGS person a GIF with a >16-colour gradient on a single scanline and laugh; though if the gradient was vertical, yeah, the results looked better than a PC's. Meanwhile, the Amiga with its HAM mode sat laughing at everyone.