
5200 vs. 7800


jbanes


Definitely faster since you have hardware acceleration. Technically you're still in a graphics mode, but the blitting is being done automatically by the accelerator.

 

On a CGA card, there is a 6845 chip which basically serves as a bunch of interconnected counters with some logic that generates sync and blanking signals, a "cursor" signal, and display addresses. In 80 column mode, once every 558ns, during the displayed portion of the screen, the following sequence of events happens:


  • The 6845 outputs an address. This address is fed to bits A1-A13 of the RAM; bits A0 and A14 are clear.
     
  • The data from the RAM appears on D0-D7, which are tied to A3-A10 of a ROM chip holding the character set; bits A0-A2 of the ROM are connected to the 6845 which outputs a scan line count.
     
  • 279ns after the first address was output, the output of the ROM is latched into a pattern latch and A0 of the RAM is set.
     
  • 279ns after that, the second byte of data from the RAM is latched into an attribute latch. The pattern latch is latched into a shifter. The 6845 receives a clock pulse and outputs the next address.
     
  • The top bit of the attribute latch goes through some logic with a blink-mode-enable latch and bit 4 of a hardware frame counter. Bits 0-3 of the latch feed directly into the "foreground color" input of the mux in the next stage; bits 4-6 feed into the "background color" input. Bit 7 is ANDed with an inverted output of a blink-mode latch before it too is passed to the "background color" input. Further, a "blink-blank" output is generated by NANDing together the blink-mode latch, bit 7 of the attribute latch, and bit 4 of the frame counter.
     
  • The above sequence of events is repeated for the next display character while the shift register outputs the 8 bits that were just latched (they'll finish shifting in 558ns). The output of this shift register is ANDed with the blink-blank output above; that then controls the select input on a 2x4 mux which is driven with the upper and lower halves of the attribute latch.
     
  • The output of that mux is fed to the monitor.
     

Where is the blitter? Once the character codes are fed through the ROM, the resulting data are held in latches for less than a microsecond before they vanish into the ether. They're not stored anyplace.
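For the curious, the sequence above boils down to something like the following software sketch of the data path (character code -> ROM pattern -> shift register -> attribute mux). The font and buffer contents are placeholders and the blink logic is left out; it illustrates the flow, not the actual CGA silicon.

#include <stdio.h>

static unsigned char vram[2000 * 2];    /* 80x25 cells: character byte, attribute byte */
static unsigned char chargen[256 * 8];  /* 8 scan lines per character (placeholder font) */

/* Produce the 8 pixel colours for one character cell on one scan line. */
static void shift_out_cell(int cell, int scanline, unsigned char *pixels)
{
    unsigned char code    = vram[cell * 2 + 0];            /* even byte: character code */
    unsigned char attr    = vram[cell * 2 + 1];            /* odd byte: attribute       */
    unsigned char pattern = chargen[code * 8 + scanline];  /* character ROM lookup      */
    unsigned char fg      = attr & 0x0F;                   /* bits 0-3: foreground      */
    unsigned char bg      = (attr >> 4) & 0x07;            /* bits 4-6: background      */

    for (int bit = 0; bit < 8; bit++) {                    /* the shift register + mux  */
        int on = (pattern >> (7 - bit)) & 1;
        pixels[bit] = on ? fg : bg;                        /* blink-blank logic omitted */
    }
}

int main(void)
{
    unsigned char pixels[8];
    vram[0] = 'A';      /* character code */
    vram[1] = 0x1E;     /* yellow on blue */
    shift_out_cell(0, 0, pixels);
    for (int i = 0; i < 8; i++)
        printf("%d ", pixels[i]);
    printf("\n");
    return 0;
}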

 

What matters is the result, not the means.  Would you object to saying that Adventure uses a frame counter at addresses $AF and $A0 to control color flashing when the game is idle?  The 2600, after all, has no hardware frame counter unlike some other computers (the CGA has a hardware frame counter that blinks the cursor, e.g.)  So is it wrong to refer to $AF/$A0 as a frame counter?

 

Yes, it's a frame counter done in software. But since the processor wasn't purpose-built for this specific task, you can't call the 6502 a frame counter.

 

I didn't say the 6507 was a frame counter. I said the 33rd and 48th bytes of the RIOT chip's RAM were used as a frame counter. In other cartridges, those bytes do other things, but in Adventure, they serve as a frame counter.
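In other words, a "software frame counter" is nothing more exotic than a byte or two of RAM bumped once per frame, with one of its bits gating a colour flash. A toy sketch of the idea (an illustration of the concept only, not Adventure's actual code; the variable names and the choice of bit are made up):

#include <stdio.h>

static unsigned char frame_lo, frame_hi;    /* stand-ins for the two RAM bytes */

static void vertical_blank(void)            /* called once per displayed frame */
{
    if (++frame_lo == 0)
        ++frame_hi;
}

static int flash_on(void)
{
    return (frame_lo >> 4) & 1;             /* toggles every 16 frames */
}

int main(void)
{
    for (int frame = 0; frame < 64; frame++) {
        vertical_blank();
        printf("frame %2d: flash %s\n", frame, flash_on() ? "on" : "off");
    }
    return 0;
}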

 

Just as how you can't call the ANTIC a framebuffer. The ANTIC is a graphics co-processor (generally referred to as a graphics accelerator) that drives the GTIA to produce a video signal. You can write software that makes it act like a framebuffer, but it's still a "soft" implementation.

 

I didn't say the ANTIC was a frame buffer. A buffer holds things, and the ANTIC doesn't have anything near enough memory to hold a frame of non-trivial video.

 

A frame buffer is an area of memory that is clocked out to the display in such fashion as to produce a generally-unchanging correspondence between memory locations and screen locations.

 

Close. A framebuffer is a device that produces a generally-unchanging correspondence between memory locations and screen locations. More generally, it's a device that produces a direct correspondence between a given memory heap and the output of the video display. The memory doesn't have to be mapped to a given location; that's primarily a feature for easy addressing. Framebuffer cards exist that allow their memory to be accessed via ports, bankswitched memory locations, and other "fun" (*cough*) tricks.
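Put concretely, that "generally-unchanging correspondence" is just the usual address arithmetic: pixel (x, y) always lives at base + y * pitch + x. A minimal sketch with made-up dimensions:

#include <stdio.h>
#include <string.h>

#define WIDTH  320
#define HEIGHT 200

static unsigned char framebuffer[WIDTH * HEIGHT];   /* one byte per pixel */

static void put_pixel(int x, int y, unsigned char colour)
{
    framebuffer[y * WIDTH + x] = colour;    /* the same location on every frame */
}

int main(void)
{
    memset(framebuffer, 0, sizeof framebuffer);
    put_pixel(160, 100, 15);                /* centre pixel, colour 15 */
    printf("byte offset of (160,100) = %d\n", 100 * WIDTH + 160);
    return 0;
}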

 

A display controller may use RAM as a frame buffer, but unless the RAM is built into the display controller, the display controller itself is not a frame buffer. I will agree that there are times when it is reasonable to refer to a macroscopic device, as opposed to just an area of RAM, as a frame buffer. Such situations most commonly arise when dealing with things like broadcast video where a device is used that takes video in and holds it for a while (typically 0-17ms) before outputting it; someone using the device would have no interest whether it used RAM, shift registers, bucket-brigade devices, delay lines, or other technologies provided it met the necessary quality and timing requirements.

 

A graphics accelerator is a device for shovelling large amounts of data into and within a frame buffer.

 

Close. A graphics accelerator is a device to translate abstract graphics commands (commonly 2D or 3D vector commands) into a more displayable form. That could be framebuffer data (as with many rasterizers) or it could be a set of commands (as with the ANTIC's control of the GTIA).

 

The ANTIC is a strange bird (likewise the Amiga's COPPER and a few other things). They could be called accelerators, but they are functionally very different from the graphics accelerators typically associated with frame buffers.

 

Frame buffer writes are not timing-critical. Faster is better, but if, when running a game, the processor is for some reason only able to update 60 pixels per second, the proper graphics will still appear, eventually. The function of the accelerator is to make things happen faster from a USER perspective.

 

By contrast, the types of control register updates performed by the ANTIC or COPPER are timing critical. If a background-color-change event happens 1/1000 of a second late, the user won't notice that the event took longer than it should have to appear; what the user will notice is that the background color change occurred about 16 scan lines below where it should have. From a user perspective, the "speed" at which the ANTIC/COPPER do their thing doesn't affect the speed of the system at all. It may have other visible effects (e.g. the number of blank scan lines between "screens" on the Amiga) but the term "accelerator" is usually used to refer to things that make things appear faster.
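A toy illustration of the difference: a late framebuffer write merely shows up late, but a beam-synchronized register change that slips by a few scan lines shows up in the wrong place. The register name and scan-line numbers below are purely illustrative, not actual ANTIC or Copper values.

#include <stdio.h>

struct beam_op { int scanline; const char *reg; int value; };

static const struct beam_op list[] = {
    {   0, "BACKGROUND", 0x00 },    /* black at the top of the frame */
    { 100, "BACKGROUND", 0x84 },    /* switch colour mid-frame       */
    { 192, "BACKGROUND", 0x00 },    /* back to black at the bottom   */
};

int main(void)
{
    int next = 0, count = sizeof list / sizeof list[0];

    for (int line = 0; line < 262; line++) {            /* one NTSC field */
        if (next < count && list[next].scanline == line) {
            printf("line %3d: %s <- %02X\n", line, list[next].reg, list[next].value);
            next++;   /* had this run a few scan lines late, the change would
                         simply have appeared lower on the screen */
        }
    }
    return 0;
}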

 

A video overlay generator is a device to inject an alternate source of video data for part of the frame, often obscuring data read from a framebuffer.

 

Drop the part about the framebuffer and you've got it spot on. Keep in mind that sprite hardware is also a video overlay device. (The sprite overlays the background in the video signal.) Last I checked, there's no framebuffer driving the 2600.

 

I said "often". The 2600 is the notable exception.


I was going to correct some of the more glaring errors in jbanes' latest post, but for all the good it would do I think I'll find this more satisfying--

 

jbanes, you are a clueless fluff-wit.

 

Ahh, that felt good. Anyway...

 

The Sun usage of "framebuffer" is nonstandard. Get over it.


 

Heh. I suppose I enjoy sometimes expounding on technical stuff when I should be coding instead, but your post is probably more efficient.


I am a hardware/software engineer who has designed and implemented a 320x200x4gray LCD controller for an 80186-clone single-board computer (which had a DMA controller, but no built-in video of any sort).  I know what's entailed in display generation.  I would refer to the area of memory that I had the DMA controller clock out to the display as a framebuffer, even though the designers of the SBC probably never imagined it as such.

 

Nice to meet you. It sounds like you created a Shared Memory Architecture framebuffer through the DMA controller. Since I'm assuming that was its designed purpose, I see no issue with calling it a framebuffer. The primary difference is that you're driving an LCD display (a fairly digital process, and nowhere near as finicky on the timing) rather than an electron beam.

 

The concept of a framebuffer is to buffer the data for driving the electron beam in much the same way you use a FIFO buffer to drive a serial port. The biggest difference is that when framebuffers allow random access to the video buffer, they become useful for emulating the capabilities of sprite hardware.

 

For an LCD, driving the display means there's no need to worry about the timing of a physical beam. (No HSync, no VSync, no unviewable area, etc.) You just cycle through the pixels one by one, preferably fast enough to prevent them from fading. Most LCDs have a driver chip that does this automatically, allowing for a variety of signals to be sent to the display. Some even run programs on an internal microprocessor to interpret the incoming signals into something usable by the display. (e.g. Most LCD TVs emulate the NTSC signal for backward compatibility. They have no need for such complex signals, though.)

 

Frontbuffer and backbuffer are appropriate terms for systems that have more than one framebuffer and provide the ability to switch among them.

 

Indeed. Otherwise it's just referred to as the "video buffer", "display buffer", or "graphics buffer". These terms are pretty much interchangeable, though the latter is often applied to per-pixel modes while "display buffer" is often used to refer to text mode data.

 

This is how I'm referring to it. The ANTIC is not a true framebuffer device. It actually has more in common with modern 3D Vector hardware in that it is a separate processor that queues up commands for just in time processing.

 

Can the ANTIC write to RAM at all? It might be useful in some cases if it could but I know of no such ability.

 

I'll grant you that I'm far from an expert on the ANTIC, but as far as I know, no. Why does it need to? (i.e. What are you getting at?) It takes commands in from RAM, and outputs them to the GTIA. The GTIA then drives the video signal.

 

Probably the biggest difference I see is that the ANTIC doesn't have its own bus. Performance-wise, you just can't do much worse than that.

 

Sure you can. How about a display controller that can't fetch anything from memory at all and must be spoon-fed by the processor?

 

As I said: "you can't get much worse than that". No engineer in his right mind would spoon feed the display controller unless he a)dedicated the processor or b) didn't actually need the processor time. Granted, the 2600 gets close, but it was breaking new ground in making formerly expensive features inexpensive.

 

In 40-column text mode, the Atari runs a dot clock of 7.16 MHz (chroma*2), a character clock of 0.84 MHz, a memory clock of 1.79 MHz, and the CPU also at 1.79 MHz. Every frame will represent 29,344 memory/CPU cycles, of which something over 9,000 are consumed by the display, leaving about 20,000. The Apple II also runs a dot clock of 7.16 MHz, but the character clock is 1.02 MHz. The CPU clock is also 1.02 MHz, and the CPU alternates memory fetches with the display hardware. Each frame represents 16,768 cycles, all of which are available to the CPU.

 

Seems the Atari doesn't do too badly.  There are some real advantages to having a fixed alternation of display/processor memory access so I can't fault the Apple for its approach, but I wouldn't condemn the performance of the Atari.

 

Mmm. Except that my argument was based on the fact that the ANTIC could have had a separate bus. This would have greatly improved performance over the existing solution.

 

The Apple advantages are:

 

1. The Apple gets away with 100% of its cycles vs. the 68% of the Atari. The Atari advantage is entirely in the CPU clock. Had Apple released a higher speed version in the 80's, the Apple II could have easily outpaced the contemporary 5200. Instead, it gave it a run for its money using tech from 1977. (The same year as the 2600 was released.) Had the ANTIC been able to read its commands and data without disturbing the processor, it would have left the CPU with 100% of its time as well.

 

2. The Apple doesn't have to render 60 frames per second. The framebuffer allows it to skip several frames, thus giving it more processing time. 30 FPS was considered silky smooth back then (still is, really), meaning that you could double your processing time to 33,536 cycles. The difference between 30 and 60 FPS would likely go unnoticed by gamers.
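The arithmetic behind point 2, as a back-of-the-envelope sketch using the Apple II figure quoted above (16,768 cycles per 60 Hz frame):

#include <stdio.h>

int main(void)
{
    const long cycles_per_frame = 16768;    /* per 60 Hz frame, as quoted above */

    /* skip = 1: render every frame (60 FPS); skip = 2: every other frame (30 FPS) */
    for (int skip = 1; skip <= 2; skip++)
        printf("%d FPS: %ld cycles per rendered frame\n",
               60 / skip, cycles_per_frame * skip);
    return 0;
}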

 

And why are you referring to character mode? The primary comparison is between the Apple II and the 5200. Unless I missed something, the 5200 was not booted into character mode very often.

 

Agreed. :) The PC had an accelerated framebuffer because it was designed for "serious" home and business applications. The use of a framebuffer and character ROM greatly simplified the programming, making it much more useful in business and BASIC programming.

 

The CGA framebuffer could only be written during hblank or vblank when using 80-column text mode. This severely limited screen update speed. Graphics mode did not have this limitation. Performance of CGA games was often in fact somewhat better than that of their Apple II counterparts.
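The usual workaround, for what it's worth, was to poll the CGA status port and only touch the text buffer during retrace. A rough sketch, assuming a DOS-era compiler such as Turbo C (inportb and MK_FP come from its dos.h):

#include <dos.h>

#define CGA_STATUS 0x3DA    /* bit 0 set = safe to access the buffer (retrace) */

static void write_cell(unsigned offset, unsigned char ch, unsigned char attr)
{
    unsigned char far *vram = (unsigned char far *)MK_FP(0xB800, 0);

    while (inportb(CGA_STATUS) & 0x01)      /* wait until the display is active again      */
        ;
    while (!(inportb(CGA_STATUS) & 0x01))   /* then wait for the start of the next retrace */
        ;
    vram[offset * 2]     = ch;              /* character byte */
    vram[offset * 2 + 1] = attr;            /* attribute byte */
}

int main(void)
{
    write_cell(0, 'A', 0x1E);               /* top-left cell, yellow on blue */
    return 0;
}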

 

Indeed. It's worth noting that the Apple II was not a "game machine" either, despite its reasonably good capabilities. Both the PC and the Apple II were often pressed into service as one, however. The greatest issue I took with CGA mode was that it was just plain ugly. The choice of cyan, magenta, and white did not make for very good graphics. The few games I remember playing on it, however, worked well enough. There just weren't many of them. The CGA 320x200 graphics mode did have an alternate color palette of red, green, and brownish-orange, but I can't ever remember it being used.

 

IBM actually made a move away from supporting gaming by introducing the far superior EGA graphics adapter. The EGA adapter was capable of up to 640×350 with 16 colors from a 64-color palette. It looked great for business applications and basic graphics work. However, this resolution easily outstripped the ability of many programs to keep up with the rendering, making slow programs and screen tearing a common occurrence. Some popular games were made for this mode (e.g. EGA Trek, EGA Asteroids, and Where in Time is Carmen Sandiego), but it was otherwise ignored until the introduction of VGA and the 286 processor.

 

unnumbered - Hack 80-column character mode so characters are two scanlines high, yielding a pseudo 160x100 16-color mode.

 

As I remember, PC-BASIC used to be able to access this mode by setting the characters per line correctly. I was a bit disappointed when I found that this trick didn't work in the later GW-BASIC language.
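For the curious, the RAM side of that hack looks roughly like the sketch below. It assumes the 6845 has already been reprogrammed so each character row is two scan lines high (the exact CRTC values are omitted here) and that blink has been disabled so all 16 background colours are usable; a DOS-era compiler is assumed again (MK_FP from Turbo C's dos.h).

#include <dos.h>

static unsigned char far *vram;

static void fill_cells(void)
{
    unsigned i;

    vram = (unsigned char far *)MK_FP(0xB800, 0);
    for (i = 0; i < 80u * 100u; i++) {
        vram[i * 2]     = 0xDE;     /* right half-block character */
        vram[i * 2 + 1] = 0x00;     /* both "pixels" start black  */
    }
}

/* With every cell holding the half-block glyph, the attribute's background
 * nibble becomes the left pixel and its foreground nibble the right pixel. */
static void put_pixel160(int x, int y, unsigned char colour)   /* colour 0..15 */
{
    unsigned cell = (unsigned)y * 80 + (unsigned)(x >> 1);
    unsigned char attr = vram[cell * 2 + 1];

    if (x & 1)
        attr = (attr & 0xF0) | colour;          /* right pixel: foreground nibble */
    else
        attr = (attr & 0x0F) | (colour << 4);   /* left pixel: background nibble  */
    vram[cell * 2 + 1] = attr;
}

int main(void)
{
    fill_cells();
    put_pixel160(0, 0, 9);      /* light blue dot in the top-left corner */
    return 0;
}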

 

SuperCat: "Even in character mode, I see no reason not to refer to it as a framebuffer device since the display was generated by clocking data out of RAM in real time with a 1:1 correspondence between RAM and screen locations." (Sorry, the board breaks if this is in quote tags)

 

Um, no. The resolution of the screen was still 320×200 or 640×200, but the CGA card was automatically looking up the bitmaps and producing the correct multi-pixel video signal. True framebuffers are generally dumb devices that produce a signal based on data fed to them according to various timing registers. By fiddling with the timing registers (such as the undocumented VGA "Mode X") you could directly modify the resolution of the video signal.

 

Edit: I'll respond to your other reply tomorrow. Right now I need to hit the hay.

Edited by jbanes

Heh... "fluffwit"...

 

Regarding ANTIC, if you ignore the line-by-line mode-change ability (which is, granted, its most notable feature), it's basically just a DMA controller. So in that sense it's not terribly unusual.
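For reference, an ANTIC "program" really is just a short table of fetch instructions. Below is the stock GRAPHICS 0 style display list written out as bytes; the screen and display-list addresses are placeholders and this is from memory, so treat it as a sketch rather than gospel.

/* A GRAPHICS 0 style ANTIC display list as raw bytes (placeholder addresses). */
static const unsigned char gr0_display_list[] = {
    0x70, 0x70, 0x70,        /* 3 x "8 blank scan lines" = 24 lines of top overscan */
    0x42, 0x00, 0x40,        /* mode 2 (40-column text) + LMS: screen RAM at $4000  */
    0x02, 0x02, 0x02, 0x02,  /* 23 more mode-2 character lines...                   */
    0x02, 0x02, 0x02, 0x02,
    0x02, 0x02, 0x02, 0x02,
    0x02, 0x02, 0x02, 0x02,
    0x02, 0x02, 0x02, 0x02,
    0x02, 0x02, 0x02,
    0x41, 0x00, 0x30         /* JVB: jump back to the list (here $3000) and wait for VBLANK */
};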

 

Oh wait, EDIT--

 

The Apple doesn't have to render 60 frames per second.

Neither does the Atari. You can put the Atari's 6502 into an infinite loop and the display won't go anywhere. The CPU has absolutely zero involvement in outputting the video display.

 

And why are you referring to character mode? The primary comparison is between the Apple II and the 5200. Unless I missed something, the 5200 was not booted into character mode very often.

Yeah, if by not very often you mean every single time you turn it on.

 

You sure are wrong a lot, aren't you?

Edited by ZylonBane

To hell with it. I can't sleep now anyway.

 

Heh... "fluffwit"...

 

So, let's get this straight. From the moment this thread started, you've been nothing but negative and abrasive toward the discussion. Your comments directed at me have been nothing but inflammatory and wanting in any real technical data. All the while you claim you're "trying to mix a little information", of which you've provided zero. You got supercat to charge in arguing semantics he doesn't understand either, and you haven't conceded a single point ANYONE has made in this thread.

 

Now you have the gall to try to start a flamewar by casting insults?

 

I'll tell you what. How about we settle this right here and right now? We'll solve the matter of the framebuffer, permanently.

 

To recap the overall situation, you contest, on the basis of (lemme see here) 1 stub of an article, 4 non-authoritative sources, and 3 sources that don't disagree with a thing I've said (I'll let you figure out which is which), that the "framebuffer" as a device is an invention of Sun Microsystems' marketing, and that everyone else in the world only uses the term to refer to any memory anywhere that is used to hold a frame of data, regardless of whether or not it's driving a monitor. Is that correct?

 

Well, let's have a contest to prove it. If you can prove that I'm not right in a fixed amount of time, you win. If you give up and I can't prove I'm correct by the end of that time, you win. What do you say? Are you up for a simple challenge that will settle this matter once and for all? Shall we match wits and see whose are "fluffier"?

 

Supercat, how about you? You in?

 

Edit: Clarified framebuffer definition as per Bryan's post.

Edited by jbanes

It's the middle of the night and I have to get ready for work so I have to make this quick....

 

 

But isn't framebuffer device a different concept than framebuffer? Wouldn't framebuffer device imply that it's a self-contained implementation of a framebuffer? Isn't it possible that Sun's definition of a framebuffer is application specific because Sun systems use video cards with their own RAM?

 

If I was reading Atari documentation, wouldn't it show me a specific 9-pin plug and call it a controller port? Could I then argue that the Nintendo connector isn't a real controller port?

 

Also, why does something stop being something if it uses a common bus and bus cycles? Did you know the Amiga's Paula steals CPU cycles to play music? Does this make it less of a sound chip than SID?

 

-Bry

Edited by Bryan

http://www.sunhelp.org/faq/FrameBuffer.html#00

 

In its simplest meaning, and as far as the Graphics hardware engineers are concerned, a frame buffer is simply that; the video memory that holds the pixels from which the video display (frame) is refreshed.

 

Reading the rest of that particular paragraph shows that Sun is simply replacing the words "vram" with "frame buffer".

 

Now, it's necessary to look at the type of computers Sun built: PC-type machines, where everything was a separate add-on. This is important to note because Apple computers were similar in that they were designed to be modular. This is a critical point. Because of the modular nature, common resources can't be used because of the ever-changing memory map you would have any time you added, removed, or changed anything, breaking any code that didn't specifically check for every known combination of hardware.

 

Furthermore, you need to understand that before this, video output was entirely software generated at the cost of CPU time. Like the 2600. The video output didn't have memory, couldn't access memory; you had to hold its hand and tell it everything. This only serves to make it harder to generate video, is very CPU-intensive, and wastes coding space. THIS is what a NON "frame buffer" video output is: video output without memory. Using a video output that accesses memory eliminates this, requiring only that you dump your data into the respective RAM, and it appears on the screen as an independent function of the video output.

 

Now consider C-64/A800 series computers. They were not designed to be modular, but rather fixed to a specific overall hardware design. Generally speaking, everything will always be in the same place, and thus you are able to use common resources. There is no need for "separate" VRAM, because the video output already has access to VRAM in the form of those shared system resources. Functionality is the same. Video output is assigned a bank of RAM out of a common pool, and nothing is required beyond dumping what you want shown into that memory, and it appears. Nothing is being "emulated". The video output has independent access to its own reserved chunk of memory.

 

With that, let's repeat what Sun Microsystems, your favorite authority on the matter, said:

In its simplest meaning, and as far as the Graphics hardware engineers are concerned, a frame buffer is simply that; the video memory that holds the pixels from which the video display (frame) is refreshed.

 

So let's recap, shall we:

 

Frame buffer = Video output fed from memory resources.

Non Frame buffer = Video output generated by software.

 

'k? :roll:


I'm genuinely at a loss as to why jbanes thinks the Apple has a framebuffer, but the Atari doesn't.

 

Apple: Generates video from a chunk of system RAM.

Atari: Generates video from a chunk of system RAM.

 

Apple: Can generate several video modes depending on its configuration.

Atari: Can generate several video modes depending on its configuration.

 

Apple: Requires no CPU intervention to generate display.

Atari: Requires no CPU intervention to generate display.

 

And yet, he's decided that the Apple has a framebuffer, but the Atari doesn't. Someone please explain this to me.


I'm genuinely at a loss as to why jbanes thinks the Apple has a framebuffer, but the Atari doesn't.


 

Way to avoid the challenge, there. If I really am as witless as you make me out to be, then why not take me up? The gauntlet is on the ground. Are you going to pick it up?

 

I'll explain (if you're actually interested in hearing). I've said many, many, many times that the ANTIC can emulate a framebuffer. The ANTIC isn't designed as a framebuffer (rather, more of a natural evolution of the playfield design of the Atari 2600), but it can be programmed to act as one. Once it's programmed to act as one, it is "emulating" a framebuffer. From the dictionary:

 

3. Computer Science. To imitate the function of (another system), as by modifications to hardware or software that allow the imitating system to accept the same data, execute the same programs, and achieve the same results as the imitated system.

 

1. To take as a model or make conform to a model: copy, follow, imitate, model (on, upon, or after), pattern (on, upon, or after).

 

Why you find that to be such a hard concept to grasp, I do not understand. The end result is the same: You have a working framebuffer, sans the drawbacks of the ANTIC/GTIA system. Yet you insist that I'm "wrong". From there, you and Supercat have come to maintain that a "framebuffer" is (precise definition here) nothing more than a buffered frame in memory. By that definition, you could have a machine without a video output and still have a framebuffer.

 

I maintain that the precise definition of "framebuffer" is a device that drives a display based on a buffered grid of "Picture Element" (pixel) data.

 

So, either we can agree that we just have different terminology (and you can stop casting insults at people), or we can complete a challenge to show who is "correct". What say you?


I'll explain (if you're actually interested in hearing). I've said many, many, many times that the ANTIC can emulate a framebuffer.

This is never going to end until you admit that your usage of "framebuffer" is in the minority. Yes, in some contexts a framebuffer describes dedicated video display hardware. However, in the overwhelming majority usage this is not the case. You keep repeating "ANTIC isn't a framebuffer! ANTIC isn't a framebuffer! Squawk!", oblivious to the fact that NOBODY IS SAYING IT IS. What many people have said to you many times, using iteratively smaller and smaller words, is that the MEMORY is the framebuffer. And yet, you refuse to grasp this point. It's like you've brainwashed yourself or something.

 

Incidentally, in the Linux world, a framebuffer is "a hardware-independent abstraction layer". That's right, it's a completely software concept. Do you want to call up every Linux dev in the world and tell them they're using the wrong word?

Edited by ZylonBane

What many people have said to you many times, using iteratively smaller and smaller words, is that the MEMORY is the framebuffer.

 

Which I keep repeating and repeating is a colloquial definition that is not precisely correct. According to this definition, rendering a frame of animation to memory is a "framebuffer". But technically, this is incorrect. A framebuffer is a device that uses buffered pixel data to drive a display. Period, end of story. That's a framebuffer.

 

And yet, you refuse to grasp this point. It's like you've brainwashed yourself or something.

 

I grasp your point just fine. I keep having to tell you that your definition of framebuffer, while acceptable in many non-technical circles, is skewed. You don't seem to want to follow that.

 

Incidentally, in the Linux world, a framebuffer is "a hardware-independent abstraction layer". That's right, it's a completely software concept. Do you want to call up every Linux dev in the world and tell them they're using the wrong word?


 

The Linux Framebuffer driver is a "Virtual Framebuffer" similar to the X Virtual Framebuffer. It emulates the hardware, allowing programs designed for a real framebuffer to operate. From your link:

 

The Linux framebuffer (fbdev) is a graphic hardware-independent abstraction layer to show graphics on a console without relying on system-specific libraries such as SVGALib or the heavy overhead of the X Window System.

 

It was originally implemented to allow the Linux kernel to emulate a text console on systems such as the Apple Macintosh that do not have a text-mode display, and was later expanded to Linux's originally-supported IBM PC compatible platform, where it became popular largely for the ability to show the Tux logo on boot up.

 

You'll note that nowhere in your link does it say, "a framebuffer is memory". It clearly states that the driver is a software emulation of a hardware device. It has been expanded to provide flat memory emulation for the PC VESA modes, which are usually bank switched.
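For anyone who hasn't poked at it, using fbdev really is just "map the device and write pixels". A minimal sketch, assuming a 32-bpp mode and permission to open /dev/fb0:

#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    int fd = open("/dev/fb0", O_RDWR);

    if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
                  ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        perror("/dev/fb0");
        return 1;
    }

    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Paint a grey bar across the top of the screen (assumes 32 bpp). */
    for (unsigned y = 0; y < 10 && y < var.yres; y++)
        for (unsigned x = 0; x < var.xres; x++)
            ((uint32_t *)(fb + y * fix.line_length))[x] = 0x00808080;

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}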

 

From the Xvfb man page:

 

Xvfb is an X server that can run on machines with no display hardware and no physical input devices. It emulates a dumb framebuffer using virtual memory.

 

Did you catch that? It EMULATES a dumb framebuffer using virtual memory. In case that didn't sink in, let me repeat it. It EMULATES a dumb framebuffer using virtual memory.

 

If it was just a memory backing as you say, Xvfb would be a framebuffer rather than emulating one.

 

The problem is that at some point along the line, people didn't understand the technical nature of a framebuffer and started referring to buffer memory as the framebuffer itself. This is incorrect, even though it's an error that's often repeated.

Edited by jbanes

Then you just go on saying "dumb framebuffer", and we'll all carry on with the modern usage.


 

Fine with me. I never demanded anything else.

 

And you owe a lot of people apologies for train wrecking a perfectly good conversation.

 

For those of you who are interested in the historical usage, I prepared a few links for the challenge that ZylonBane refused to accept. You may find this to be of interest if you're the type who enjoys studying the history of computer science.

 

------------

 

In 1969 Joan Miller experimented with a paint program on a 3-bit framebuffer developed at Bell Labs(1). While the concept of a framebuffer had been theorized about for quite a long time, this was the first known example of such hardware.

 

In 1972, Richard G Shoup created the first complete, fully functional framebuffer along with a paint program to utilize it. This system was dubbed "SuperPaint"(2), and had a user interface very similar to paint programs we use today. The hardware was implemented as a 307,200-pixel shift register, allowing pixels to be accessed only when the specific scan line and pixel time were reached. This shift register was synchronized with the television scan rate. Richard also implemented the ability to read in a video signal by synchronizing the television signal between the inputs and outputs. This allowed SuperPaint to also be the first example of a video capture system. The complete SuperPaint system currently resides in the permanent collection of the Computer Museum History Center in Mountain View, California.

 

In 1974, Evans & Sutherland brought the first commercial framebuffer(3) (designed by Jim Kajiya with full random-access memory(6)) to market. The device cost upwards of $50,000, but started a revolution in graphics development across(5) universities nationwide.

 

Within a few years, memory started to become cheap enough to allow devices like the Apple II to contain framebuffers. By the 1980s, Unix manufacturers began appearing to provide high-quality graphics workstations to the market. SGI(7)(8), HP(9), DEC(10), and Sun Microsystems(11)(12) all released framebuffers throughout the '80s and well into the '90s.

 

Development didn't stop there, however, and manufacturers began to add graphics accelerator chips to accelerate their frame buffers for text modes, graphics primitives, and many other features used by the emerging GUI systems. The final result is the highly advanced 2D graphics cards we have today. They couple a graphics accelerator, framebuffer, and video overlay device to produce high-quality imagery at blistering speeds. Many also include 3D vector processors which can be used to rasterize millions of 2D or 3D vector shapes per second to the framebuffer.

 

------------

 

1. http://accad.osu.edu/~waynec/history/PDFs/Annals_final.pdf

2. http://accad.osu.edu/~waynec/history/PDFs/14_paint.pdf

3. http://accad.osu.edu/~waynec/history/lesson15.html

4. http://www.siggraph.org/movie/

5. http://accad.osu.edu/~waynec/history/PDFs/paint.pdf

6. http://research.microsoft.com/users/kajiya/

7. http://scanimate.zfx.com/DVD2T.html

8. http://hardware.majix.org/computers/sgi.iris/iris3130.shtml

9. http://openpa.net/systems/snakes.html

10. http://q.dyndns.org/~blc/DS3100/specs.html

11. http://www.sunhelp.org/faq/FrameBufferHistory.html

12. http://www.sunhelp.org/faq/FrameBuffer.html

 

And with that my friends, I bid this thread adieu. Thank you to those of you who had positive contributions to add. I hope that we can intelligently discuss many of the points discussed at a future date, in a hopefully less hostile forum. Good day.


Then you just go on saying "dumb framebuffer", and we'll all carry on with the modern usage.


 

Fine with me. I never demanded anything else.

 

And you owe a lot of people apologies for train wrecking a perfectly good conversation.

 

How about I get one from you for having a conniption fit over the fact that you misread my OPINION as a comparison?

 

It takes two to trainwreck a thread, and you started very early on here.

Edited by Danno

It looks like the only thing this thread accomplished was to serve as yet another cautionary tale for the next time anyone tries to get in an argument with jbanes.


 

 

Among other people. :ponder:

Edited by jetset

This topic is now closed to further replies.