
BYTE Magazine



I used to do that until I found files miscomparing between drives. I've got two full 640 GB drives that I compare every year. And every year I find a file or two that miscompare.

 

This is totally off topic, but there are tools to deal with this. Rsync reads back the copy it makes and calculates a checksum to ensure the copy was made correctly. If you're worried about data going bad on disk, you can include redundancy in the form of PAR2 files. If you're not beholden to Microsoft, you can even use filesystems that automate all this data integrity stuff for you, e.g. zfs or btrfs.

 

Sorry about that, please continue with your regularly scheduled Byte releases. :cool:


I used to do that until I found files miscomparing between drives. I've got two full 640 GB drives that I compare every year. And every year I find a file or two that miscompare.

 

This is totally off topic, but there are tools to deal with this. Rsync reads back the copy it makes and calculates a checksum to ensure the copy was made correctly. If you're worried about data going bad on disk, you can include redundancy in the form of PAR2 files. If you're not beholden to Microsoft, you can even use filesystems that automate all this data integrity stuff for you, e.g. zfs or btrfs.

 

Sorry about that, please continue with your regularly scheduled Byte releases. :cool:

 

 

Before I restore data from a pair of mission-critical backups, I compare each file (or directory) using something like Easy Duplicate Finder. It doesn't matter if the data was written and verified at the time of backup; you need to verify the data just prior to restoring it.
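For anyone who'd rather script that pre-restore comparison than click through a GUI tool, here's a minimal Python sketch of the idea. The helper names are my own, not any particular product's API:

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in 1 MB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def compare_trees(dir_a, dir_b):
    """Return relative paths whose contents differ or exist on one side only."""
    a, b = Path(dir_a), Path(dir_b)
    files_a = {p.relative_to(a) for p in a.rglob("*") if p.is_file()}
    files_b = {p.relative_to(b) for p in b.rglob("*") if p.is_file()}
    mismatches = files_a ^ files_b  # present on only one side
    for rel in files_a & files_b:
        if sha256_of(a / rel) != sha256_of(b / rel):
            mismatches.add(rel)
    return sorted(mismatches)
```

An empty result means the two backup copies agree byte for byte; anything it returns is exactly the kind of silent miscompare described above.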


BYTE Vol 00-08 1976-04 Automation - 100 Pages 58,220,082 bytes

 

BYTE Vol 00-08 from April 1976... Now THAT is an awesome cover! I thought Jaws was released in 1977, so now I know where Spielberg got the idea for the main Jaws poster :) Make sure you read the "Letters" column in this issue - lots of stuff about Star Trek and space war games. The article on the magic of computer languages was also very good. The rest is awesome fodder for the low-level hardware guys.

 

A few nice little ditties from page 16:

 

Glorobots

 

Rx

A robot was having conniptions

at reading handwritten inscriptions,

but acquired the knack

by decoding a stack

of typical doctor prescriptions.

 

Evolution

A self-evolved robot named Babbitt,

because of his dubious habit

of unbridled mating

and self-propagating,

was housed in a hutch like a rabbit.

 

Hear Ye Hear Ye

The sensory robots are near,

but will not be ready this year,

for each of them tries

to eat with his eyes,

and cocks his nose trying to hear.

 

In this issue....

 

Foreground

BIORHYTHM FOR COMPUTERS

HOW TO BUILD A MEMORY WITH ONE LAYER PRINTED CIRCUITS

AARGH! (or, HOW TO AUTOMATE PROM BURNING WITHOUT EML)

CONTROLLING EXTERNAL DEVICES WITH HOBBYIST COMPUTERS

INTERFACE AN ASCII KEYBOARD TO A 60 mA TTY LOOP

DESIGN AN ON LINE DEBUGGER

IO STROBES FOR THE ALTAIR 8800

SAVE MONEY USING MINI WIRE WRAP

 

Background

PROGRAMMING THE IMPLEMENTATION

THE MAGIC OF COMPUTER LANGUAGES

THE SR-52: ANOTHER WORLD'S SMALLEST

FRANKENSTEIN EMULATION

MICROPROCESSOR UPDATE: TEXAS INSTRUMENTS TMS9900

 

Nucleus

In This BYTE

Customization-The Expression of Individuality

Letters

Space Ace Revisited

What's New

BYTE's Bits

Technology Update

BYTE's Bugs

Classified Ads

Book Review

Clubs, Newsletters

BOMB

Reader's Service

 

Download it here: BYTE Vol 00-08 1976-04 Automation

 

 

Cover

 

post-12606-129631460766_thumb.jpg

 

Index

 

post-12606-129631462133_thumb.jpg

 

Actually, Jaws was a 1975 release.

http://www.imdb.com/title/tt0073195/

 

Back then, there were only two theaters locally, and both only single screens. One of them kept Jaws for the whole summer, with Phantom of the Paradise as the second feature. The repeated exposure to the combination left me twisted for life.

Edited by epobirs

Or if the drive has failed. Drives are built nowhere near as well as they once were. I have an 80 GB drive that's not even 5 years old; it was filled and put away, and it started clicking when I went to retrieve the data. I have a SCSI RAID full of 9 GB second-generation Seagate Cheetah drives that I booted up after almost a decade, and they worked perfectly. You get what you pay for: the Cheetah drives were $1100 each back in the 90s.

 

I really have to dispute this. Inexpensive consumer drives are built to standards today the industry couldn't dream of delivering in earlier eras. What was once reserved for the very expensive high end units is now just typical of the stuff on the shelf at Best Buy. I've been involved in data recovery as one of my jobs for nearly twenty years. The rate of drives just up and dying for no apparent reason is almost non-existent compared to what once passed for normal.

 

The thing to keep in mind about that five year old 80 GB drive is that by that time it was a niche market item largely aimed at the corporate desktop market. For those machines the local storage needs are very low, as most data is kept on the server(s). This market would still be satisfied with 40 GB drives, but the drive makers will only do so much to accommodate it. (Personally, the only spinning platter drive smaller than 100 GB I still use regularly is an old Apricorn 1.8" USB model that is very handy but quite obsolete.) Drive reliability is also a lesser concern for this market, as the effort of replacing a failed drive with a new unit containing the standard image is trivial. So the likes of Dell and HP keep those drives around to make the idiot CFOs think they're getting a bargain, but nobody else should be bothered.


Actually, Jaws was a 1975 release.

http://www.imdb.com/title/tt0073195/

 

Back then, there were only two theaters locally, and both only single screens. One of them kept Jaws for the whole summer, with Phantom of the Paradise as the second feature. The repeated exposure to the combination left me twisted for life.

I'd never heard of Phantom of the Paradise until now... and after looking at the pictures google brought up I don't think I missed much.

<edit>

93% on Rotten Tomatoes... seriously??

Edited by JamesD

This is totally off topic, but there are tools to deal with this. Rsync reads back the copy it makes and calculates a checksum to ensure the copy was made correctly. If you're worried about data going bad on disk, you can include redundancy in the form of PAR2 files. If you're not beholden to Microsoft, you can even use filesystems that automate all this data integrity stuff for you, e.g. zfs or btrfs.

 

Sorry about that, please continue with your regularly scheduled Byte releases. :cool:

Thank you for this post! I had no idea ZFS existed. That's what I need. And it looks like FreeNAS has an implementation of it. Man, that would save a lot of hassle!

I really have to dispute this. Inexpensive consumer drives are built to standards today the industry couldn't dream of delivering in earlier eras. What was once reserved for the very expensive high end units is now just typical of the stuff on the shelf at Best Buy. I've been involved in data recovery as one of my jobs for nearly twenty years. The rate of drives just up and dying for no apparent reason is almost non-existent compared to what once passed for normal.

 

The thing to keep in mind about that five year old 80 GB drive is that by that time it was a niche market item largely aimed at the corporate desktop market. For those machines the local storage needs are very low, as most data is kept on the server(s). This market would still be satisfied with 40 GB drives, but the drive makers will only do so much to accommodate it. (Personally, the only spinning platter drive smaller than 100 GB I still use regularly is an old Apricorn 1.8" USB model that is very handy but quite obsolete.) Drive reliability is also a lesser concern for this market, as the effort of replacing a failed drive with a new unit containing the standard image is trivial. So the likes of Dell and HP keep those drives around to make the idiot CFOs think they're getting a bargain, but nobody else should be bothered.

I have to dispute what you're saying. Find the MTBF on an older drive and a consumer one made today. The consumer drive MTBF isn't the same. The aforementioned Seagate drives have a 1,000,000 hour MTBF. Most consumer drives have a 400,000 hour MTBF with an 'office' duty cycle--not the 24x7 duty cycle testing of the past.

 

When I bought the 80 GB drive it was one of the largest capacity USB external drives available, and it was so new that it was only available with USB 1.0. This was a top-of-the-line product, on par with other higher-end products of the time.

 

I don't trust any drive from this decade for more than a year or two. I have two Western Digital 1 GB drives that were manufactured in 1996 and recycled after seeing duty in harsh environments, and they still work fine. And I have 4 failed drives from this decade, more than in my whole history of computing with PCs back to 1988. (Before that, I'd only had one Maxtor LXT-213S and two Quantum 4 GB Atlas drives fail.)

 

A $60 drive is built better than one that cost $1000? I think things have gotten better, but not enough to warrant a 10x reduction in cost without compromising quality. And this cost reduction doesn't even account for inflation. There's just no way.


I'm sorry but MTBF is irrelevant to this discussion and doesn't really mean anything about the quality of a product.

 

An MTBF of 1,000,000 hours does not mean any single drive is expected to run that long; it's a statistic over a whole population of drives: one failure per 1,000,000 accumulated drive-hours.

 

So say you have 1000 drives running 24 hours a day: 1,000,000 / (24 × 1000) ≈ 41.6, which means the manufacturer estimates there's going to be only one failure in the whole fleet every 41.6 days. But this is also a MEAN time, an average, and it says nothing about the distribution: out of those 1000 drives, 20 could fail within 2 days while 800 last 500 days.
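That fleet arithmetic can be sketched in a couple of lines. This is a toy population model only, not a reliability prediction for any particular drive:

```python
def expected_failures(num_drives, hours_powered_on, mtbf_hours):
    """Fleet-level MTBF estimate: accumulated drive-hours divided by MTBF.

    A population average; the spread around it can be large, as the
    post above points out.
    """
    return num_drives * hours_powered_on / mtbf_hours

# 1000 drives running 24 h/day against a 1,000,000 hour MTBF:
# roughly one expected failure every 41.6 days.
days_per_failure = 1_000_000 / (24 * 1000)
```

Note that a fleet of one drive gives an "expected" 0.009 failures per year against a 1,000,000 hour MTBF, which is why MTBF says so little about whether *your* drive survives.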

 

Older drives might be a bit more reliable simply because they have fewer platters and fewer disk heads, and the technology of the day allowed the heads to stay further away from the surface, preventing "stickiness" (stiction) or surface scratches due to accidental shocks.

Newer drives have the disk heads much closer to the surface and the data density is higher (there's more content per square inch of platter). But keep in mind the quality of construction (precision and accuracy) is much better than in the old days, and while the higher density causes read errors all the time, the processors on the drive PCB are much more powerful and can detect and correct those errors without the user noticing.

 

In my opinion, the best backup solution today would be to get 2 or more disk drives from different series (not made in sequence on the same day). I would also choose a drive with a lower rotation speed (less heat fluctuation and vibration), and one with "Advanced Format", i.e. 4 KB sectors: internally these drives carry much more error correction data, so over the long term they can keep self-correcting the data they read.

 


 

These 4 KB sector drives have a higher data density while carrying much more error correction information per volume of data, and you can buy a disk with just one platter and 2 read/write heads where, for the same capacity, you'd otherwise be buying a drive with 2-3 platters (the spec sheets on the manufacturers' sites tell how many heads/platters each disk has).
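As a back-of-the-envelope illustration of why bigger sectors help, here is a toy format-efficiency model. The per-sector overhead figures are made-up placeholders, not real drive specifications (the AnandTech article linked in this post has actual numbers):

```python
def format_efficiency(sector_bytes, per_sector_overhead_bytes):
    """Fraction of raw platter capacity left for user data (toy model)."""
    return sector_bytes / (sector_bytes + per_sector_overhead_bytes)

# Illustrative overheads only. The point: eight 512-byte sectors pay
# the gap/sync/ECC cost eight times, while one 4 KB sector pays it
# once, leaving room for a stronger ECC block on the same surface.
legacy_512 = format_efficiency(512, 64)     # ≈ 0.89
advanced_4k = format_efficiency(4096, 128)  # ≈ 0.97
```

Whatever the real overhead bytes are, the 4 KB sector comes out ahead as long as its single overhead block is smaller than eight of the 512-byte ones.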

 

Of course, I would also use PAR2 or some additional way to add error correction (rar archives with error recovery volumes for example).

 

See this very informative article about these new drives: http://www.anandtech.com/show/2888

 

After backing up the data, I would store one or more disks in a separate location, just in case the house burns down or there's a lightning strike or flood.

Edited by mariush

BYTE Vol 04-06 1979-06 Artificial Intelligence - 288 Pages 184,211,897 bytes

 

Download it here: BYTE Vol 04-06 1979-06 Artificial Intelligence

 

 

Cover

 

post-12606-129557365232_thumb.jpg

 

Index

 

post-12606-129557382142_thumb.jpg

Double page sticking problem in the 159-164 range. Those pages need to be re-scanned.

 

(It's kind of easy to notice when the page size in the PDF is taller and you're reading in side-by-side mode.)

 

 

Anyhow, I think this was the first issue I ever had back in the day.

 

Unfortunately I tossed the issue out.. I saved all the recent ones but this one for some reason... Have to wait for another to pop up to make the corrections.

I can grab those pages when I get back home to Portland. Practically living in my "second home" in Cupertino right now.... :-(

So, I actually have two spare copies of this magazine, so it's probably best if I just ship one off to ThumpNugget rather than scan on my crummy scanner again. I'll wait for ThumpNugget to PM me with details of where to ship it to.

 

Cheers!


Actually, Jaws was a 1975 release.

http://www.imdb.com/title/tt0073195/

 

Back then, there were only two theaters locally, and both only single screens. One of them kept Jaws for the whole summer, with Phantom of the Paradise as the second feature. The repeated exposure to the combination left me twisted for life.

I'd never heard of Phantom of the Paradise until now... and after looking at the pictures google brought up I don't think I missed much.

<edit>

93% on Rotten Tomatoes... seriously??

 

It was a great little B Movie. You don't know what you're missing. That rating is fully earned by those who know.

 

One of my personal favorites:

http://www.youtube.com/watch?v=7Pa56msnwIY&feature=related

Edited by epobirs

I really have to dispute this. Inexpensive consumer drives are built to standards today the industry couldn't dream of delivering in earlier eras. What was once reserved for the very expensive high end units is now just typical of the stuff on the shelf at Best Buy. I've been involved in data recovery as one of my jobs for nearly twenty years. The rate of drives just up and dying for no apparent reason is almost non-existent compared to what once passed for normal.

 

The thing to keep in mind about that five year old 80 GB drive is that by that time it was a niche market item largely aimed at the corporate desktop market. For those machines the local storage needs are very low, as most data is kept on the server(s). This market would still be satisfied with 40 GB drives, but the drive makers will only do so much to accommodate it. (Personally, the only spinning platter drive smaller than 100 GB I still use regularly is an old Apricorn 1.8" USB model that is very handy but quite obsolete.) Drive reliability is also a lesser concern for this market, as the effort of replacing a failed drive with a new unit containing the standard image is trivial. So the likes of Dell and HP keep those drives around to make the idiot CFOs think they're getting a bargain, but nobody else should be bothered.

I have to dispute what you're saying. Find the MTBF on an older drive and a consumer one made today. The consumer drive MTBF isn't the same. The aforementioned Seagate drives have a 1,000,000 hour MTBF. Most consumer drives have a 400,000 hour MTBF with an 'office' duty cycle--not the 24x7 duty cycle testing of the past.

 

When I bought the 80 GB drive it was one of the largest capacity USB external drives available, and it was so new that it was only available with USB 1.0. This was a top-of-the-line product, on par with other higher-end products of the time.

 

I don't trust any drive from this decade for more than a year or two. I have two Western Digital 1 GB drives that were manufactured in 1996 and recycled after seeing duty in harsh environments, and they still work fine. And I have 4 failed drives from this decade, more than in my whole history of computing with PCs back to 1988. (Before that, I'd only had one Maxtor LXT-213S and two Quantum 4 GB Atlas drives fail.)

 

A $60 drive is built better than one that cost $1000? I think things have gotten better, but not enough to warrant a 10x reduction in cost without compromising quality. And this cost reduction doesn't even account for inflation. There's just no way.

 

MTBF is a nearly meaningless factor in most situations. It's something for salesmen to exploit but has little bearing on real life.

 

I prefer real world experience. During the 90s, much of my income was based on the rate of drive failures. You could build a business around it and many did. Today, you'd starve unless you're dealing in the super high-end of data recovery. The drives are manufactured in volumes two orders of magnitude greater than the 90s (they go in so many more places in addition to computers being so much more numerous), hold orders of magnitude more data, cost pennies on the dollar per gigabyte, and yet I'd starve if I depended on drive failures for my living. Data recovery these days is mainly about malware rather than equipment failure.

 

The 500 GB Seagate unit in one of my machines has been running torrents almost 24/7 for three years and hasn't missed a beat. And that is a cheap consumer grade drive made in 2007. I consistently killed off drives from earlier eras with the same task.

 

Have you ever wondered why drives haven't gotten any cheaper for nearly a decade? They've gotten better but the base price has stayed the same. Around the time the first Xbox was being launched I was part of a group of tech writers Seagate brought out to Littleton, Colorado to show off their facility there. There is a big cluster of hard drive development in that area due to IBM placing one of the first drive R&D operations there, with engineers who left to start their own companies staying in the area. Anyway, around the turn of the century, all of the bits that make up a minimal hard drive had been pretty much perfected. The single one-sided platter, single drive head, read/write channel electronics, and the metal casing had all been cost-reduced as much as possible, until such time as some new process revolutionizes manufacturing. Nobody is holding their breath on that one.

 

So, since then, just about everything has been refinements on the pinnacle they'd reached around 2001. Reduced process nodes for the chips, faster interfaces, improved densities: all of these made little difference in the cost to produce a minimal drive. One of the places where the companies can still strive for advantage is manufacturing quality; reduced defects and improved reliability make for a better net profit. This has had huge benefits for the consumer. We now get immense amounts of highly reliable storage for trivial amounts of money, at least from the perspective of somebody who used to use a hole punch to make double-sided floppies out of single-sided ones.


I'm sorry but MTBF is irrelevant to this discussion and doesn't really mean anything about the quality of a product.
I have to disagree. Any drive manufactured today can be easily ranked in quality by it's MTBF. Enterprise class drives have a 1.2m MTBF. Consumer drives do not. It's not an indicator of the actual quality of a particular drive, but it does show what class or product it is. And the price tells you something as well.

 

Case in point: the Western Digital RE3 drives I bought. Checking on PriceScan shows that a Western Digital Caviar Green 1 TB goes for $55, and on the same screen the RE3 is $130. I don't believe the quality of the Caviar Green is the same as the RE3's.

 

Older drives might be a bit more reliable simply because they have fewer platters and fewer disk heads, and the technology of the day allowed the heads to stay further away from the surface, preventing "stickiness" (stiction) or surface scratches due to accidental shocks.

Newer drives have the disk heads much closer to the surface and the data density is higher (there's more content per square inch of platter). But keep in mind the quality of construction (precision and accuracy) is much better than in the old days, and while the higher density causes read errors all the time, the processors on the drive PCB are much more powerful and can detect and correct those errors without the user noticing.

I completely agree with you. The older drives did have a lot less heat, less 'speed' in the moving parts, and much less density to deal with. While that may explain their reliability, it doesn't excuse modern-day products from a lack of reliability if they are supposedly built with the same mindset.

 

It's amazing to see a 1 GB and a 1 TB drive side by side in the same form factor. Even after 15 years of technological evolution, 1000x as much data in the same space is still quite a feat.

In my opinion, the best backup solution today would be to get 2 or more disk drives from different series (not made in sequence on the same day). I would also choose a drive with a lower rotation speed (less heat fluctuation and vibration), and one with "Advanced Format", i.e. 4 KB sectors: internally these drives carry much more error correction data, so over the long term they can keep self-correcting the data they read.

 


 

These 4 KB sector drives have a higher data density while carrying much more error correction information per volume of data, and you can buy a disk with just one platter and 2 read/write heads where, for the same capacity, you'd otherwise be buying a drive with 2-3 platters (the spec sheets on the manufacturers' sites tell how many heads/platters each disk has).

 

Of course, I would also use PAR2 or some additional way to add error correction (rar archives with error recovery volumes for example).

 

See this very informative article about these new drives: http://www.anandtech.com/show/2888

 

After backing up the data, I would store one or more disks in a separate location, just in case the house burns down or there's a lightning strike or flood.

I've read about the 4K-sectored drives. There are some incompatibility issues currently, but I think those will go away as things evolve, just like the 4 GB memory limit did. My long-term plan is to migrate to ZFS on two manually mirrored FreeNAS boxes connected via a VPN. That should decrease the likelihood of data corruption by an order of magnitude: each FreeNAS shouldn't have corruption issues on its own, and the two units can compare with each other to confirm that. Worst-case scenario: three FreeNAS boxes with ZFS.
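For what it's worth, the end-to-end checksumming that makes ZFS attractive here can be approximated by hand with a checksum manifest, which is essentially what a ZFS scrub does against its stored block checksums. A rough Python sketch; the helper names are mine, not a FreeNAS or ZFS API:

```python
import hashlib
from pathlib import Path

def build_manifest(root):
    """Map each file's relative path to its SHA-256 digest."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def scrub(root, manifest):
    """Return files whose current digest no longer matches the manifest,
    i.e. candidates for silent corruption (or deletion) since the
    manifest was written."""
    current = build_manifest(root)
    return sorted(k for k, v in manifest.items() if current.get(k) != v)
```

Unlike real ZFS this can only detect rot, not repair it; repair still needs the second mirror (or PAR2 data) to restore from.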

OK! That is it for the next few weeks.. Feels good to have 1976 completed.. I was going to save Vol 2-2 until I returned, but there was a specific request for it, so there was not much point in letting it sit around :)

 

 

Looks like there is some heavy bandwidth usage happening right now.. The upload looks good.. Just slow right now.. Be patient.

 

Okay, it's been two weeks, are you back yet? :D


OK! That is it for the next few weeks.. Feels good to have 1976 completed.. I was going to save Vol 2-2 until I returned, but there was a specific request for it, so there was not much point in letting it sit around :)

 

 

Looks like there is some heavy bandwidth usage happening right now.. The upload looks good.. Just slow right now.. Be patient.

 

Okay, it's been two weeks, are you back yet? :D

 

I believe if you look back he indicated he would be gone for pretty much the whole month. Be patient. The free candy will return someday.


OK! That is it for the next few weeks.. Feels good to have 1976 completed.. I was going to save Vol 2-2 until I returned, but there was a specific request for it, so there was not much point in letting it sit around :)

 

 

Looks like there is some heavy bandwidth usage happening right now.. The upload looks good.. Just slow right now.. Be patient.

 

Okay, it's been two weeks, are you back yet? :D

 

I believe if you look back he indicated he would be gone for pretty much the whole month. Be patient. The free candy will return someday.

 

In the meantime, go and read the mags already posted. And if you read them all, go study the advertisements. And when you're done with THAT, start building some of the hardware projects. That will keep you busy till the next pdf comes out.


I was hoping that they would talk about the computer onboard NCC-1701-D and Voyager.

Kinda hard to do being that the magazine article predated the next series by ten years and Voyager by 18 years :)

ohh... yes :dunce: :?

 

The Enterprise D and Voyager computer IS in this issue. Well, at least on the cover, though she (AKA Majel Barrett Roddenberry) is wearing red!

 

 

post-12606-129600917098_thumb.jpg


BYTE Vol 03-01 1978-01 The Brains of Men and Machines - 194 Pages 121,372,411 bytes

 

BYTE Vol 03-01 from January 1978... The article describing the brain in terms of a computer was very interesting. There is even a list of attributes and requirements (energy requirements, capacity, etc), almost like reading the specs for a video card :) Two other articles interested me: the good review of the SOL-10, since up until now I've only seen pictures and descriptions of the SOL-20, and the overview of the Motorola 6800 instruction set.

 

 

Foreground

ADD MORE ZING TO THE COCKTAIL

A FLOPPY DISK INTERFACE

THE WATERLOO RF MODULATOR

MOUNTING A PAPER TAPE READER

 

Background

THE BRAINS OF MEN AND MACHINES: Biological Models for Robotics

THE IRS AND THE COMPUTER ENTREPRENEUR

THE MOTOROLA 6800 INSTRUCTION SET

A USER'S REACTION TO THE SOL-10 COMPUTER

THE SECOND WORLD COMPUTER CHESS CHAMPIONSHIPS

STRUCTURED PROGRAMMING WITH WARNIER-ORR DIAGRAMS: Part 2

SIMULATION OF MOTION: Model Rockets and Other Flying Objects

A NOVICE'S EYE ON COMPUTER ARITHMETIC

NOTES ON BRINGING UP A MICROCOMPUTER

 

Nucleus

In This BYTE

What Is This Phenomenon Personal Computing?

Letters

BYTE's Bits

Book Reviews

Clubs, Newsletters

Technical Forum: A Note on Advances in Technology

What's New?

Classified Ads

BOMB

Reader Service

 

Download it here: BYTE Vol 03-01 1978-01 The Brains of Men and Machines

 

 

Cover

 

post-12606-129833962929_thumb.jpg

 

Index

 

post-12606-12983396471_thumb.jpg


ThumpNugget, thanks for all your hard work in scanning these issues. It's much appreciated!

 

Over your posting hiatus, I managed to catch up on skimming through my backlog of downloaded BYTE issues. FYI, BYTE Vol 11-03 (the one with the Atari ST on the cover) has stuck-together pages 65-68.


What a great article! I think today's so-called AI "experts" should go and read this article. Programming supercomputers based on 0's and 1's that use on/off transistors is a majorly-faulty approach to achieving any form of intelligence; no matter how rudimentary. All you can really accomplish is little more than a huge truth table as output.

 

The type of circuit element needed for true AI applications has not yet been invented, nor has it been simulated with today's digital logic gates. The component needs to handle analog and digital signals in time/space/intensity dimensions, AND in varying quantities of each aspect. Sometimes it might be analog in nature, sometimes digital, or perhaps anywhere in between the two. It might have a varying frequency or amplitude, and it might act here or there in x,y,z.

Moreover, the circuit element we need should be *generally* consistent, yet at the same time it must be able to deviate and make "mistakes". Depending on the behavior of its connections, it should be able to make and break those connections seemingly at random, or at least turn them on and off, or strengthen and weaken them. It should also be able to connect itself to other circuit elements from time to time without outside intervention, most of the time. Many of the circuit elements in a single "block" may operate at slightly different speeds, and they may vary their output levels from time to time. Each element must also be able to be re-purposed, by itself or at the direction of other circuits. And last but not least, it should be able to change its own internal logic "truth table", sensitivity, and power output, slightly in most cases but sometimes radically, with all aspects being affected by the environment within which the element resides, its own internal "bios" code, and direction from outside signaling.

The transistors sitting in your Core i7 chip are nowhere near any of these requirements. In fact, modern manufacturing processes go to great lengths to ensure there is no variation between processor units; what comes out of one CPU will be exactly the same as another. Zero difference. Even an FPGA with DSP peripherals doesn't meet all the requirements.

 

As far as programming it, you wouldn't need to. Not in the sense of the traditional computer with ram and hard disk. There is no sequence of instructions that are to be followed. No way! You might do some "bios" low level core programming to get some of the regulatory stuff up and running stable. But beyond that, no. The conglomeration of circuit elements would "program" and arrange themselves over time, by themselves. How they would connect and remember and process things would be a result of the initial layout or schematic of the circuit, but only vaguely, generally, for a little while. After time, one network of elements could develop a radically different configuration. None of our current computer systems allow for that flexibility.

 

On another note: it is also refreshing to read articles that are not constantly quoting other work; the bibliography for this article had only a few references. That's fantastic! Many papers I read today have a bibliography section equal in length to the original article! Fer'Chrissakes! Talk about cookie-cutter degrees being handed out.. Where's the originality?

Edited by Keatah

The type of circuit element needed for true AI applications has not yet been invented.

 

Turing tells us that any two sufficiently complex computers are equivalent. The underlying circuits are irrelevant. Digital circuits can approximate analog circuits to whatever degree of precision you wish to engineer.
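A one-line illustration of that last point: each extra bit of resolution halves the worst-case error of a digital approximation to an analog value. A toy quantizer sketch, not a claim about any particular hardware:

```python
def quantize(x, bits):
    """Map x in [-1.0, 1.0] onto the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round((x + 1.0) / step) * step - 1.0

# Worst-case error is half a step, so it halves with every added bit:
# pick enough bits and the digital value is as close to the analog
# one as you care to engineer.
```

An 8-bit quantizer pins any value in the range to within about 0.004; a 16-bit one to within about 0.000015; and so on without limit.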


The question I have is whether or not our intelligence is really a computer. It can do what computers do, but it can also do a lot of things computers can't.

 

A good retro analogy would be the analog pong type games, vs the ones that operated with CPUs to follow. When I first played the old "computer video games", they were all those analog pong type chips. Those things are just latches, states, timers, etc... The circuit literally is the game, but there is no CPU. It just is the game.

 

The VCS can do that game, but it can do other things too, and it's more than just a circuit, in that it's active, where the PONG games are passive.

 

Maybe brains are something like that. I don't mean passive, but just something more than what we know as a computer today. Had Turing not had such a rough time of things, maybe we would know more. Sure is fascinating stuff.

 

I saw some writings about how the new memristor will have AI implications, because it has a state memory, unlike the circuits we use now. We implement state memory in software, which is very complex and resource-intensive, from what little I understand.

 

So, imagine going back to the "PONG" era, only knowing about that circuit, like we do now. What changes? And how does that impact what we can build, and or what Turing would have theorized on it?


It's cool that the first pong games *were* a result of the circuit pattern - and not software. Those games are closer to the human brain than any modern Core i7 could ever be!

 

Likewise, the VCS TIA chip is much closer to a brain than the biggest and baddest GPU of today.

 

How does emulation change this? Or relate to this?

Edited by Keatah