
Load Times: Blu-Ray vs. DVD


Ze_ro


ROM and RAM have maximum transfer rates. They aren't truly instant access, and there ARE load times (especially if you're using bankswitching). They're just very small and don't require a load screen to remind you that the game isn't busted.

That's true of almost all hardware, including built-in RAM. That's not a "load time", that's just normal system operation. "Load times" are caused when you need to move large amounts of data from permanent storage to active memory.

Access times, then.

 

And the NES was unique in that it had a chunk of the cart bus mapped directly to the video hardware. Everything else has to copy data from the ROM cart to the video RAM (even if it's just a designated area in the main RAM instead of dedicated VRAM).
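
To make that concrete, here's a rough C sketch of that copy step on a hypothetical non-NES console. The addresses and port names are made up for illustration; real hardware (SNES, Genesis, GBA, and so on) each has its own port or DMA interface, but the shape of the work is the same.

```c
#include <stddef.h>
#include <stdint.h>

/* All three addresses below are invented for the sketch -- every console
 * puts its ROM window and VRAM ports somewhere different.  The point is
 * only that the CPU (or a DMA channel doing the same job faster) has to
 * shovel graphics data out of cartridge ROM and into video RAM before the
 * video chip can draw it.  On an NES cart with CHR-ROM, that copy never
 * happens, because the PPU reads pattern data straight off the cart bus. */
#define CART_ROM_BASE  ((const uint8_t *)0x08000000)        /* assumed ROM window   */
#define VRAM_ADDR_PORT (*(volatile uint16_t *)0x04000000)   /* assumed address port */
#define VRAM_DATA_PORT (*(volatile uint16_t *)0x04000002)   /* assumed data port    */

/* Copy `bytes` of tile data from cartridge ROM into VRAM through a write port. */
static void load_tiles(size_t rom_offset, uint16_t vram_addr, size_t bytes)
{
    const uint8_t *src = CART_ROM_BASE + rom_offset;

    VRAM_ADDR_PORT = vram_addr;                  /* tell the video chip where it goes */
    for (size_t i = 0; i + 1 < bytes; i += 2)    /* feed the port one word at a time; */
        VRAM_DATA_PORT = (uint16_t)(src[i] | (src[i + 1] << 8)); /* address auto-increments */
}
```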

 

The main cause of cartridge load time will be compression, though.

Compressed data can't be used as-is; it has to be decompressed into RAM, generating a significant delay while the hardware does the decompression. And compression became a common practice once systems were beefy enough to decompress data and stick it in RAM, because ROM was a rather expensive medium and reducing the amount of data was always better than using more or larger ROMs.

The fact that some companies (say, Nintendo) controlled the production and restricted what ROM sizes a given developer had access to also fueled the issue.
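
As a toy illustration of where that decompression time goes, here's a minimal run-length decoder in C. The format (a count byte followed by a value byte) is invented for the example; real games used fancier schemes, usually LZ variants, but the work has the same shape: the CPU has to produce every output byte itself instead of the data just being read straight out of ROM.

```c
#include <stddef.h>
#include <stdint.h>

/* Decompress a trivial RLE stream in which each input pair is
 * (count, value), meaning "write `value` `count` times".  Returns the
 * number of bytes written to `dst`.  The format is made up for the
 * example. */
size_t rle_decompress(const uint8_t *src, size_t src_len,
                      uint8_t *dst, size_t dst_cap)
{
    size_t out = 0;

    for (size_t in = 0; in + 1 < src_len; in += 2) {
        uint8_t count = src[in];
        uint8_t value = src[in + 1];

        for (uint8_t k = 0; k < count && out < dst_cap; k++)
            dst[out++] = value;
    }
    return out;
}
```

On a CPU running at a few MHz, expanding even a few hundred kilobytes this way (or through anything more elaborate) is easily a visible pause, which is exactly the kind of delay being described.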

Granted, I've never mucked inside the details of modern consoles, but I'm not aware of any situations where the data had to be decompressed from a cartridge before use. IIRC, the N64 had a texture decompression engine in the hardware (which helped alleviate the small texture cache issues), but I don't know any of the details. Perhaps CPUWiz could chime in here?

Compression was a somewhat common practice in the 16-bit era, as I understand it.

I know it's present in a lot of RPGs (especially ones from Enix).

 

SNK explicitly forbade it for a long time because it let them inflate the megabit counts that they were waving around as a sign of quality, but they eventually gave up somewhere in the late '90s (what with people not caring and all).

 

 

 

I'm pretty sure the N64 texture compression hardware was for decompressing textures in video RAM immediately before use. That's the typical use of texture compression.

 

Of course, if you're storing it compressed in VRAM, you may as well store it compressed in ROM too and save the system the effort of compressing it in the first place, but...
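
For what it's worth, the block-compressed formats that later GPUs settled on (S3TC/DXT1 and friends) are built so that decompression is cheap enough to do at the moment a texel is sampled, which is why the data can stay compressed in VRAM. Here's a generic decoder in C for the opaque DXT1 case, purely as an illustration of that idea; it is not the N64's format.

```c
#include <stdint.h>

/* Expand one RGB565 value into 8-bit-per-channel RGB. */
static void rgb565_to_rgb888(uint16_t c, uint8_t out[3])
{
    out[0] = (uint8_t)(((c >> 11) & 0x1F) * 255 / 31);  /* R */
    out[1] = (uint8_t)(((c >>  5) & 0x3F) * 255 / 63);  /* G */
    out[2] = (uint8_t)(( c        & 0x1F) * 255 / 31);  /* B */
}

/* Decode one 8-byte DXT1-style block into a 4x4 array of RGB texels.
 * Only the opaque (four-colour) mode is handled here; the real format
 * also has a punch-through alpha mode when color0 <= color1. */
void decode_dxt1_block(const uint8_t block[8], uint8_t texels[4][4][3])
{
    uint16_t c0 = (uint16_t)(block[0] | (block[1] << 8));
    uint16_t c1 = (uint16_t)(block[2] | (block[3] << 8));

    uint8_t palette[4][3];
    rgb565_to_rgb888(c0, palette[0]);
    rgb565_to_rgb888(c1, palette[1]);
    for (int ch = 0; ch < 3; ch++) {
        palette[2][ch] = (uint8_t)((2 * palette[0][ch] + palette[1][ch]) / 3);
        palette[3][ch] = (uint8_t)((palette[0][ch] + 2 * palette[1][ch]) / 3);
    }

    /* Each of the remaining 4 bytes holds one row of the 4x4 block as
     * four 2-bit palette indices, lowest bits first. */
    for (int y = 0; y < 4; y++) {
        uint8_t row = block[4 + y];
        for (int x = 0; x < 4; x++) {
            int idx = (row >> (2 * x)) & 0x3;
            for (int ch = 0; ch < 3; ch++)
                texels[y][x][ch] = palette[idx][ch];
        }
    }
}
```

Since every 4x4 block decodes independently with a handful of shifts and adds, the video hardware can expand just the blocks it samples and leave the rest compressed in VRAM.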

 

 

Actually, you're assuming that form factor and storage format are related.

There's no reason whatsoever a DS card has to use flash RAM, just like the TurboGrafx cards didn't.

Your logic says that since a Channel F game LOOKS like an 8-track cassette, it has to use magnetic tape to store data.

 

I'm admittedly not aware of the actual construction of a DS card, but ROM is far more likely for the non-writable portion.

The most likely cause of DS load times is heavy data compression.

(raises eyebrow)

 

I am not assuming that form follows function. I am stating that the function of the DS card is as a storage medium, and that the data is copied off the card before use. The actual card (from my understanding) contains up to 128MB of Matrix 3D ROM (a multilayer, "fuse" WORM memory), plus some rewritable Flash memory for saving user settings. I don't know the transfer rate or access times, but I'd bet quite a bit that they're not fast enough to be used directly, and are not mapped into the system's memory.

My apologies.

I thought you were stating that "DS uses a memory card like SD cards, MMCs, and MemorySticks, and therefore is slow like SD, MMC, and MemStick."

 

Ergo, the DS uses the game card as a media device, and NOT as an integrated chip on the main bus. This design allows it to load its software from multiple sources, including Wi-Fi. The second slot, however, is an actual expansion port designed to be pin-compatible with Game Boy Advance circuit cards. Anything plugged in there will be added to the DS's expansion bus.
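
In rough C terms, the difference between the two designs looks like this. The addresses and the card_read_block() routine are invented for the sketch (the real card controller interface is whatever Nintendo made it); the point is only that a memory-mapped cartridge can be read in place, while a media-style card has to be copied into RAM before anything can use it.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* --- Cartridge mapped onto the CPU bus ---------------------------------
 * The ROM simply shows up at a fixed address (0x08000000 is the GBA cart
 * window), so reading game data is just a bus access -- no load step. */
#define MAPPED_CART_ROM ((const uint8_t *)0x08000000)

uint8_t read_mapped_byte(size_t offset)
{
    return MAPPED_CART_ROM[offset];
}

/* --- Card treated as a media device -------------------------------------
 * If, as described above, the card sits behind a controller that only
 * answers read commands, then anything you want to execute or display has
 * to be copied into main RAM first.  This routine is a stand-in; the real
 * interface is a command/response protocol, not a simple memset. */
static void card_read_block(uint32_t card_addr, void *dst, size_t len)
{
    (void)card_addr;
    memset(dst, 0, len);        /* placeholder for "wait for the card's reply" */
}

void load_from_card(uint32_t card_addr, void *ram_dst, size_t len)
{
    card_read_block(card_addr, ram_dst, len);   /* the explicit "load" step */
}
```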

 

If anyone has specs that refute that, I'd love to see them. :)

I'd love to see a detailed breakdown too.

I love seeing the inner workings laid bare.

...

Especially when I can point and laugh, like on the IBM PC (the 640K barrier exists because they started memory-mapped I/O at 640K. If it'd been at the bottom, with RAM on top, the barrier never would've existed. Admittedly they weren't thinking about long-term issues, and likely figured on breaking compatibility at the next update when it became a problem, but still... </end_tangent>)

 

 

I suspect the DS card is still directly memory-mapped, though.

Given the wide variety of things that have been memory-mapped in the past, it doesn't really make sense to remove it from the main bus, even if it operates at a vastly slower speed than most of the rest of the system.

 

Granted, I've never mucked inside the details of modern consoles, but I'm not aware of any situations where the data had to be decompressed from a cartridge before use. IIRC, the N64 had a texture decompression engine in the hardware (which helped alleviate the small texture cache issues), but I don't know any of the details. Perhaps CPUWiz could chime in here?

Well, one perfect example here is Street Fighter Alpha 2 on the SNES. There are noticeable pauses while it decompresses the character graphics. It's usually only a second or two, so it doesn't give you load screens or anything like that, but it's definitely loading.

 

--Zero

The SA1 coprocessor used to do the decompression also blocked CPU access to part of the ROM area, for what it's worth.


SNK explicitly forbade it for a long time because it let them inflate the megabit counts that they were waving around as a sign of quality, but they eventually gave up somewhere in the late '90s (what with people not caring and all).

Since most of their business in this respect was arcade machines, it makes a bit more sense to do whatever you can to remove load times. And of course, when you're selling cartridges for $200+, you can afford to throw as many ROM chips as you want in there :P

 

Especially when I can point and laugh, like on the IBM PC (the 640K barrier exists because they started memory-mapped I/O at 640K. If it'd been at the bottom, with RAM on top, the barrier never would've existed. Admittedly they weren't thinking about long-term issues, and likely figured on breaking compatibility at the next update when it became a problem, but still... </end_tangent>)

I'm not overly familiar with x86 architecture, but I know other processors like the 6800 and 6502 have a zero-page addressing mode that makes it very convenient to have your RAM start at the bottom. I think newer processors were able to redefine their zero-page though, so that might not even apply to x86.

 

--Zero


SNK explicitly forbade it for a long time because it let them inflate the megabit counts that they were waving around as a sign of quality, but they eventually gave up somewhere in the late '90s (what with people not caring and all).

Since most of their business in this respect was arcade machines, it makes a bit more sense to do whatever you can to remove load times. And of course, when you're selling cartridges for $200+, you can afford to throw as many ROM chips as you want in there :P

Well, yeah, but they could've saved a few Kbytes of ROM and made more profit on the carts.

 

Especially when I can point and laugh, like on the IBM PC (the 640K barrier exists because they started memory-mapped I/O at 640K. If it'd been at the bottom, with RAM on top, the barrier never would've existed. Admittedly they weren't thinking about long-term issues, and likely figured on breaking compatibility at the next update when it became a problem, but still... </end_tangent>)

I'm not overly familiar with x86 architecture, but I know other processors like the 6800 and 6502 have a zero-page addressing mode that makes it very convenient to have your RAM start at the bottom. I think newer processors were able to redefine their zero-page though, so that might not even apply to x86.

 

--Zero

Good point. I don't really know if that applies to the 8086/8088 or the 286.


Especially when I can point and laugh, like on the IBM PC (the 640K barrier exists because they started memory-mapped I/O at 640K. If it'd been at the bottom, with RAM on top, the barrier never would've existed. Admittedly they weren't thinking about long-term issues, and likely figured on breaking compatibility at the next update when it became a problem, but still... </end_tangent>)
I'm not overly familiar with x86 architecture, but I know other processors like the 6800 and 6502 have a zero-page addressing mode that makes it very convenient to have your RAM start at the bottom. I think newer processors were able to redefine their zero-page though, so that might not even apply to x86.
Good point. I don't really know if that applies to the 8086/8088 or the 286.

The memory-mapped I/O to which you refer was a byproduct of the IBM PC and PC AT architecture, not the x86 processor. The x86 used a 20-bit segmented memory scheme in which an upper 16-bit segment value selects the segment and a lower 16-bit value addresses the memory location inside that segment. Because the address bus was only 20 bits rather than 32, the 16-bit segments overlapped every 16 (I think it was?) bytes.

 

When the PC architecture was developed, they decided to map I/O and ROM into the upper area of memory so that RAM could grow in one unbroken block up to the reserved area. Since the 8086 couldn't address more than 1MB, this was not seen as a problem. However, when the AT was developed, they realized that the 640K-1MB gap would need to be maintained for backwards compatibility. As a result, the A20 gate (which, when the line is masked, makes 286 and 386 addresses wrap around at 1MB the way 8086 addresses did) was added, and the system was configured to boot into real mode rather than 286 protected mode.
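
To put numbers on that, here's the address arithmetic sketched in C, including the wrap-around the A20 gate exists to reproduce:

```c
#include <stdint.h>
#include <stdio.h>

/* Real-mode address formation: physical = segment * 16 + offset.  With
 * only 20 address lines (the 8086), anything past 1MB wraps back to the
 * bottom of memory; masking the 21st line ("A20") is how later machines
 * reproduce that wrap for old software. */
static uint32_t phys_addr(uint16_t segment, uint16_t offset, int a20_enabled)
{
    uint32_t addr = (uint32_t)segment * 16u + offset;
    return a20_enabled ? addr : (addr & 0xFFFFFu);      /* keep 20 bits */
}

int main(void)
{
    /* Two segment:offset pairs that land on the same physical byte,
     * showing how consecutive segments overlap every 16 bytes. */
    printf("%05lX\n", (unsigned long)phys_addr(0xB800, 0x0000, 1));  /* B8000       */
    printf("%05lX\n", (unsigned long)phys_addr(0xB000, 0x8000, 1));  /* B8000 again */

    /* The famous wrap: FFFF:0010 is 0x100000, which an 8086 (or a later
     * machine with A20 masked) folds back to address 0. */
    printf("%05lX\n", (unsigned long)phys_addr(0xFFFF, 0x0010, 0));  /* 00000  */
    printf("%06lX\n", (unsigned long)phys_addr(0xFFFF, 0x0010, 1));  /* 100000 */
    return 0;
}
```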

 

(This was all documented in Norton's famous "Pink Shirt Book" in case you're interested.)

 

On today's x86 computers, our MMUs are still configured to map the video areas into that 640K-1MB memory gap to maintain backward compatibility with software from 20 years ago. You could eliminate this gap with no ill effects, but that would require you to not use any software that accesses text mode through $B000 or graphics mode through $A000. The same thing would be true of many other '80s computer architectures if they were still around, but it so happens that we don't need to maintain backward compatibility with any of them. :)
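
And this is the sort of twenty-year-old habit being described: a 16-bit DOS program poking a character straight into the colour text buffer. (MK_FP and the far keyword here are Borland/Turbo C conventions; other DOS compilers spelled them differently, and this won't build on a modern 32/64-bit compiler.)

```c
/* 16-bit real-mode DOS only.  MK_FP() builds a far pointer from a
 * segment and an offset. */
#include <dos.h>

int main(void)
{
    /* Colour text mode lives at segment 0xB800, two bytes per cell:
     * character, then attribute.  Monochrome adapters used 0xB000. */
    unsigned char far *screen = (unsigned char far *)MK_FP(0xB800, 0x0000);

    screen[0] = 'A';    /* character in the top-left cell   */
    screen[1] = 0x1F;   /* attribute: bright white on blue  */
    return 0;
}
```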

Edited by jbanes