
Bing Co-Pilot doesn't quite get it right.



I thought I would ask the following question (co-pilot is supposed to be the integrated AI):-

show me a program example that uses antic extended mode for Atari 130xe

 

Needless to say, the program crashed; it appears Co-Pilot doesn't know the BASIC language on the Atari.

One major error is in line 20: Atari BASIC doesn't do bitwise OR (its OR is purely logical), so the result of that line is POKE 54017,1  :)
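A quick way to see this for yourself from the READY prompt (assuming a stock Atari BASIC cartridge): OR treats any non-zero value as true and just returns 1, so this prints 1 rather than a combined bit pattern:

PRINT 193 OR 32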

Also, as far as I know, the PORTB bits have to be cleared (not set) to enable the feature, so it should be something like

POKE 54017,PEEK(54017)-32

 

One other thing: if ANTIC is looking at the extended bank through the window at 16384, then poking "A"s into that window with the CPU will not put anything on screen, as the CPU is still using main memory.
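If you actually want the CPU's POKEs to land in the same bank ANTIC is displaying, bit 4 (CPU access) has to be cleared as well, something along these lines (assuming both bits are currently set):

POKE 54017,PEEK(54017)-48: REM CLEAR BITS 4 AND 5 OF PORTB

(Bits 2 and 3 of PORTB select which of the four 16K banks is seen through the window.)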

 

Line 30 does not work either: there is no display list at that address. It should be changing the screen memory address inside the actual display list, which is pointed to by locations 560 and 561.
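Something like this should do it for the standard GR.0 display list, where the screen address sits in the two bytes following the initial LMS instruction (a sketch, not tested here):

DL=PEEK(560)+256*PEEK(561): REM FIND THE DISPLAY LIST
POKE DL+4,0: POKE DL+5,64: REM POINT SCREEN MEMORY AT 16384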

 

The program really is total rubbish :(

 

This is what I got:-

10 REM Set up Antic to use extended memory
20 POKE 54017,PEEK(54017) OR 32: REM Set bit 5 of PORTB to enable Antic access to extended memory
30 POKE 560,0: POKE 561,64: REM Set up display list to point to extended memory 
40 POKE 559,34: REM Enable Antic DMA
50 FOR I=16384 TO 16384+799: POKE I,65: NEXT I: REM Fill screen with 'A'
60 GOTO 60: REM Loop forever
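
For what it's worth, putting those fixes together, something closer to working might look like this - just a sketch off the top of my head, assuming a 130XE sitting at the normal GR.0 screen with a BASIC program small enough to live below 16384:

10 REM ROUGH FIX - SHOW THE 130XE EXTENDED BANK VIA ANTIC
20 P=PEEK(54017)
30 IF P-INT(P/64)*64>=32 THEN P=P-32:REM CLEAR BIT 5, ANTIC SEES THE EXTENDED BANK
40 IF P-INT(P/32)*32>=16 THEN P=P-16:REM CLEAR BIT 4, CPU SEES THE SAME BANK
50 POKE 54017,P
60 DL=PEEK(560)+256*PEEK(561):REM ADDRESS OF THE DISPLAY LIST
70 POKE DL+4,0:POKE DL+5,64:REM SCREEN MEMORY NOW AT 16384
80 FOR I=16384 TO 16384+959:POKE I,33:NEXT I:REM 33 IS THE SCREEN CODE FOR "A"
90 GOTO 90:REM LOOP FOREVER

Reading PORTB first and only clearing the two bits leaves the OS ROM and BASIC enable bits alone.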
 

Would be interesting to see if other AIs produce the same :)

 

Edited by TGB1718

28 minutes ago, Stephen said:

What a surprise - "AI" not getting something correct.  My god I hope this fad dies faster than the pet rock did.

True, I only did it for a giggle. I was surprised when it came up with anything at all, but the result itself was nothing more than absolute trash.

 

Still, it was "BING" - what else would you expect from Micro$oft?


It's doing fancy algorithmic autocomplete from a very large training set of data. Our obscure hobby is not popular enough to put useful weights in the data set.

 

I've found Bing Chat to be pretty useful when wanting to think through algorithms, where my loops are broken, etc. Because it ate all of StackExchange and zillions of people have done those things before, it's able to give me some useful answers without a very precise search.


AI, as they like to call it, will 'find' answers that they would call 'useful' to their cause. It will give workable answers for those systems, platforms, and items that they have given it a proclivity to select. It will make a mess of things they deem unworthy or wish to go away.

It's going to follow silly trends and 'viral' ascension spires. For these very reasons it will be of limited utility to those who truly wish to create or those that actually have a mind of their own.


3 hours ago, _The Doctor__ said:

AI, as they like to call it, will 'find' answers that they would call 'useful' to their cause. It will give workable answers for those systems, platforms, and items that they have given it a proclivity to select. It will make a mess of things they deem unworthy or wish to go away.

It's going to follow silly trends and 'viral' ascension spires. For these very reasons it will be of limited utility to those who truly wish to create or those that actually have a mind of their own.

But seeing all the art that has currently been stolen and all of the gibberish that it has created, we are already at a big net loss. I'm beyond sick of hearing about it, sick of people telling me I'll be out of a job coding because of it, sick of managers and people that know nothing but buzzwords asking us how we can implement it in our product because it's the next big thing, etc.


4 hours ago, Atari8guy said:

All of the criticisms are valid and true and today's AI sucks, but you can kind of see where they might be able to get to.  "might" being the operative word.

To me, that's like saying if I burn a million acres of farm land, I may be able to better cultivate 5 acres.  It's not even risk/reward.  It's gain/loss and the balance won't swing positive - at least in my lifetime.


10 hours ago, Stephen said:

To me, that's like saying if I burn a million acres of farm land, I may be able to better cultivate 5 acres.  It's not even risk/reward.  It's gain/loss and the balance won't swing positive - at least in my lifetime.

I agree they are likely farther away than most people think, with general AI being really far away still.

 

I acknowledge, though, that I am not an expert.


Somebody will figure out uses for it. It won't be what people think though. Now computers, those are going to take all the jobs from us. My son just got some Atari computer and it scares me. People are saying they are going to do everything. I am worried about my ditch digging job. Do you think they will do that as well? What will I do for work?


It would be unfair to disqualify all artificial intelligence models because of the defects of just one. Copilot is definitely not the best - in fact, it prefers to perform Internet searches rather than execute specific commands. As a ChatGPT 4 user, I could try "training" a GPT with a manual on BASIC or assembler. For that, text that the AI can "read" would be necessary - that is, not just scanned images, but text in OCR format.


If this was a true AI, I don't think it would produce code like this with all of the errors in it.

As @Yautja says, "it prefers to perform Internet searches rather than execute specific commands."

 

I think the real problem is that it finds an example but doesn't check the accuracy of the data, just posts it.

 

A real AI would try to find several candidates, cross-reference them, and hopefully create its own working code from those.

 

I mean, it hasn't even got the Atari BASIC syntax correct.


As Flojomojo says, it's a large language model. It consists of a complex multilayered set of arrays of probability weights connecting words, which take the words in your input and cross-reference their value and sequence. In terms of Bing Chat, it may also take the output of a Web search as further input.

 

It doesn't work with Atari BASIC because there is proportionally so little Atari BASIC content online that it gets probabilistically "crowded out" by the syntax and details of other forms of BASIC in articles and books that include the word "Atari." As Yautja says, you would probably get better results by aggressively seeding the LLM with books and manuals specifically for Atari BASIC and making sure it gives those materials heavy weight, which OpenAI sells as their "custom GPT" product. I'd try that, but I don't feel like paying for it!


Yes, crowded out - it happens even to popular things when a campaign against them and for another widget is run. The output is always skewed in such models. It simply isn't intelligent; it only follows, it does not lead, it does not think. It is a popularity difference engine that leads to viral outputs. The dumbest things on the planet come from this model, not only in electronic computation but, as seen in recent years, organics as well. How do so many people and kids fall prey to the so-called 'challenges' that harm themselves? They don't think, they just ride the wave to their own demise. The popular model feeds itself, making things more popular than they should be, as it turns what it sees as a possible trend into the trend. If enough people claim the earth is flat, AI will eventually say it is. We already have people who believe this is the case. This is defective reasoning and thinking, the stuff of fiction. It is built on faulty logic and does not have the ability to reason. Not unlike those creating it.


I think that, at the moment, it is unlikely that an artificial intelligence could write executable code for a system like the Atari 8-bits. Instead, perhaps it can be useful to streamline the "translation" of code from one system to another (such as from ZX Spectrum to A8). I am not a programmer, so I am only speculating on the potential usefulness of this tool.


28 minutes ago, _The Doctor__ said:

The dumbest things on the planet come from this model, not only in electronic computation but, as seen in recent years, organics as well. How do so many people and kids fall prey to the so-called 'challenges' that harm themselves? They don't think, they just ride the wave to their own demise. The popular model feeds itself, making things more popular than they should be, as it turns what it sees as a possible trend into the trend. If enough people claim the earth is flat, AI will eventually say it is. We already have people who believe this is the case. This is defective reasoning and thinking, the stuff of fiction.

Even worse, riding the wave into hateful ideology. I think we need to worry less about Skynet taking over, and more about the way we think of other people. Why do we so easily allow others to influence our perception of other groups of people? Even groups we have never met in real life. It has been my experience that people have MUCH more in common than they have differences. Maybe it's just a human flaw that we cannot correct?


Most people get along just fine; it's when the powers that be pit one another against each other for power that things go wrong. It's almost always the 'I will unite the people' crowd that deliberately make sure they won't in policy. If you want to make it about ideology, look no further than the current mess. I find it interesting the turn made so that the discussion would be shifted into talk of 'hateful ideology', which was not the discussion. Maybe someone has politics on the mind. Let's consider what people see when something is going on around them and make the determination about how the group they are observing is. I think that is prudent. Pretending otherwise is done at your peril where most of us have grown up. Someone who hasn't lived it might consider what they perceive based on reports and platitudes more truthful than what the reality is. So let's just stick to the problem of A.I. and the topic at hand.


Not paying attention to what is developing in the area of A.I. (such as they are calling it) is how it can go wrong very quickly. People already use it to cheat on term papers, it already makes terrible statements, it also makes terrible pseudo Atari 8 bit programs etc. I think we need to worry about it enough to get a handle on it, before it becomes something more ugly than it already is.

Edited by _The Doctor__

33 minutes ago, Houdini said:

If enough people claim the earth is flat, AI will eventually say it is.

And if enough people make false statements about others enough times, the AI will eventually say that is true as well. Imagine the effect that will have on future young students' perceptions as they will surely be utilizing AI in the classroom.


That's already the case in a great number of areas, and the more people wish to make that a part of the model and make broad statements as many times as possible, the more likely it will trend... much like it already has in those other areas. I'd prefer to go about life based on first hand experience. I don't know the next mythical group or class one will be defending but maybe someone should teach the model to just avoid the topic of classification of people, period. For those who insist, maybe it should just stick to statistics per capita within a classification and percentages of affectation of the whole of said group when dealing with disease, crime, illness, etc. That might be useful. For now the model being taken is not very good. I still get a kick out of it being referred to as artificial intelligence.

Edited by _The Doctor__

1 hour ago, _The Doctor__ said:

Not paying attention to what is developing in the area of A.I. (such as they are calling it) is how it can go wrong very quickly. People already use it to cheat on term papers, it already makes terrible statements, it also makes terrible pseudo Atari 8 bit programs etc. I think we need to worry about it enough to get a handle on it, before it becomes something more ugly than it already is.

An interesting use for AI is to detect AI-authored papers. Interestingly enough, essays written by younger students who lack cohesive writing skills often flag as AI-authored.

