How practical would it be to write a sound engine that writes to the APU registers twice per frame instead of once? I think an engine like this could be interesting: more depth could be added to the APU channels, like finer frequency changes, duty cycle changes on the square channels, or volume changes. For two updates per frame, a game could fire a mapper IRQ near the middle of the frame, or as close to the middle as it can get if there are split-screen effects going on there. A game with a large status bar, like Kirby's Adventure, could do its mid-frame APU update during the status bar split... Could three or four updates per frame also be practical?
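For concreteness, here's a minimal sketch of what the mid-frame update could look like, assuming an MMC3-style scanline IRQ (latch setup at $C000/$C001 done elsewhere) and a hypothetical driver that precomputes the mid-frame register values into zero-page shadows during VBlank, so the handler stays short:

    .zeropage
    shadow_4000: .res 1     ; hypothetical shadows, filled in by the
    shadow_4002: .res 1     ; sound driver during the NMI/VBlank update
    shadow_4003: .res 1

    .code
    irq_handler:
            pha
            sta $E000           ; any write acknowledges + disables the MMC3 IRQ
            lda shadow_4000     ; duty/volume for square 1
            sta $4000
            lda shadow_4002     ; period low byte
            sta $4002
            ; writing $4003 restarts the square's phase and envelope (audible
            ; click), so a careful engine skips it unless the high byte changed
            lda shadow_4003
            sta $4003
            ; ...same pattern for $4004-$4007, $4008/$400A/$400B, $400C/$400E...
            sta $E001           ; re-arm the IRQ for the next frame
            pla
            rti

Since the handler is just loads and stores, three or four updates per frame would mostly cost extra IRQ latches and a few more shadow sets, not much CPU time.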
What if a game did its own manual sample mixing for $4011, mixing two sound channels together once per frame? I think such sound channels would need to be split into one-frame segments (1/50 or 1/60 second chunks). With mapper IRQs firing every 2-3 scanlines or so, the game can just pull the next value from a RAM buffer and write it to $4011. One problem is that sample bytes would have to be skipped during split-screen effects and VBlank; alternatively, keeping a pseudo-channel timer running inside the split-screen IRQs so $4011 still gets written would complicate those IRQs and eat into the time available for computing the split-screen data. Using conventional loops and indexing, mixing two 80-120 byte segments together would also be very time consuming and would take up a lot of the frame, leaving less time to actually update $4011. Even fully unrolled code would still be fairly slow and would need a lot of PRG space, though it wouldn't be as bad. What are better ways to incorporate pseudo-extra channels with $4011?
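To put a rough number on that cost, here's a minimal sketch of the straightforward (slow) mixing loop, with hypothetical names throughout; it assumes two 6-bit sample streams so their sum fits $4011's 7-bit range:

    SEG_LEN = 100               ; samples per frame (assumed ~6 kHz at 60 Hz)

    .zeropage
    sfx_a:      .res 2          ; pointer to channel A's segment for this frame
    sfx_b:      .res 2          ; pointer to channel B's segment for this frame
    sample_idx: .res 1

    .bss
    mix_buf:    .res 128        ; buffer the $4011 IRQs play back from

    .code
    mix_frame:
            ldy #0
    @loop:  lda (sfx_a),y       ; channel A sample, assumed 0-63
            clc
            adc (sfx_b),y       ; channel B sample, assumed 0-63
            sta mix_buf,y       ; sum is at most 126, within $4011's range
            iny
            cpy #SEG_LEN
            bne @loop
            rts

    ; playback inside each scanline IRQ is then only a few cycles:
    ;       ldy sample_idx
    ;       lda mix_buf,y
    ;       sta $4011
    ;       inc sample_idx

The loop body is roughly 24 cycles per sample, so a 100-byte segment costs on the order of 2,400 cycles (a couple dozen scanlines) before the per-IRQ playback overhead; unrolling trades PRG space for shaving the iny/cpy/bne off each sample, which matches the tradeoff described above.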
I was just wondering, but did games ever use methods similar to these, for sound effects or music? Even though it's not a good game (it's an LJN game), WWF King of the Ring seems to mix audience cheers and wrestler grunts together during gameplay, but the two sound effects seem to drown each other out.