
New GUI for the Atari 8-bit


flashjazzcat


I understand the desire to squeeze out every available byte (remember DOS 3 allocation in 1K blocks - you could waste 1000 bytes in a single file! The Horror!), but I suspect manipulations & calculations will be an order of magnitude easier if they are page-aligned - and easier code is cleaner code, and cleaner code is generally faster code.

 

Given that most RAM upgrades are in multiples of 64K, I don't think losing 255 bytes here and there would be too onerous...


About pointers... did you intend to support banked memory? I thought you did, but maybe I misremember. In that case, don't you need at least one level of indirection on the pointers already? I.e., a bank number + offset kind of deal. I mean, I didn't think you were going to be handing out bare pointers anyway is my... er, lol, point.

 

I think the general consensus a while ago was that we shouldn't overload the memory manager with too much abstraction, so yeah: you'd get a memory pointer and a bank number. What might be useful is if the system performed a "best fit" allocation, so maybe the bank with the largest amount of unallocated RAM would be used to satisfy a request (rather than taking an 8KB buffer from a bank half full of small objects). I'll go into detail later about exactly what the remit is here.

 

Why use 16 byte blocks? While memory is tight, wouldn't it be easier to use pages (256 bytes)? You'd require only 8 bytes for the table for each 16K block (leaving another 248 bytes per 16K block for other system uses), and memory manipulation is easier on page boundaries.

 

Or use one byte per page for the table (64 bytes, -1 for the first page that's reserved by the system) and use the bits for more information: is the page executable code or data? R/W or read-only? Or any other information that the OS may find useful in its machinations and manipulations.

 

He's trying to minimize wasted space per allocation chunk... worst case for a 16 byte chunk is 15 bytes, worst case for a page is 255 bytes. He's considering large objects spanning more than one chunk, so the concern is with the leftover space in the last chunk.

 

I think going to a 256 byte allocation chunk is a good idea, actually... make me map my own objects and the system just manages pages, but that's far afield from where he's at now in his thinking, I believe.

 

I understand the desire to squeeze out every available byte (remember DOS 3 allocation in 1K blocks - you could waste 1000 bytes in a single file! The Horror!), but I suspect manipulations & calculations will be an order of magnitude easier if they are page-aligned - and easier code is cleaner code, and cleaner code is generally faster code.

 

Given that most RAM upgrades are in multiples of 64K, I don't think losing 255 bytes here and there would be too onerous...

 

If we went for 256 byte blocks, we'd have to manage some kind of pool or slab allocation for the system, I think.

 

I first modelled the system after TOS. I had an array of objects (maybe consuming a whole 16KB bank), each object being c. 20 bytes long. An object is the basic building block of the system, and every object on the screen (menu bar, window, icon, etc, etc) is described by an object record in the array. Now, some objects (such as windows) require supplementary or object-specific data, and this is referenced by the OB_SPEC field in the object record. Many objects don't require it, but those that do automatically have a supplementary structure allocated from the heap when the object is instanced. So at first I figured: keep all the object records in one bank (an array), and all the OB_SPEC stuff on the heap. Applications will naturally want to allocate their own buffers from the heap, and we could use yet a third bank for this.

 

Problems arise, naturally, if the system runs out of space in the OB_SPEC bank (say, if memory was limited and the application happened to allocate some small buffers there). The whole thing seems limited by these 16KB boundaries. Fonts, icons, text, menu items, even program code needs to reside in these banks.

 

Now, when I decided to have a crack at cooperative multi-tasking (or even simple task switching), the matter becomes even more complicated. 800 system-wide objects in a 16KB array won't be enough, and yet it would be wasteful to allocate each application a 16KB bank of its own for an object array. A calculator, for example, may use a mere few dozen button objects as well as its own window and small menu. To be more flexible, I wondered if we might do away with the object array altogether, and go back to my original arrangement from last year, whereby OBJECT structures were allocated from the heap, just like the OB_SPEC structures and other ancillary data. As long as everything fits into 16KB (minus application-allocated buffers, fonts, etc), we're OK.

 

Note in this scenario, the GUI itself (in the cart space) is able to explicitly bank in memory at $4000-$7FFF to get around the time-consuming indirection that applications in the same space will have to endure. The GUI code can traverse entire object trees without any further bank switching at all. Access to the OB_SPEC data in heap banks is efficiently done with small pieces of wrapper code.

 

But really - this is the most fraught part of the project bar none: envisaging likely memory allocation scenarios and designing a best-fit solution. Inter-bank links in trees were something I wanted to avoid, but on the other hand would it be so hard to manage? Otherwise the 16KB banks seem to me a tremendous limitation, since one must predict - if we are to segregate different banks for different data - the likely balance of objects / supplementary structures / fonts / program code. It's therefore appealing to apply indirection to the entire heap (the most drastic solution), and implement virtual 64KB memory spaces. Memory would be accessed indirectly - even by the GUI itself - with all banking performed transparently. However, I fear this would be eye-wateringly slow; one of the reasons I'm planning such a rich control set is so that the slow indirect memory access from applications won't be such a bottleneck.

 

I tend to think a big compromise is in order here, but whichever way it's done, it has to be pretty smart. Certainly we can afford to allocate banks like crazy on a 1MB machine... perhaps I'm worrying too much about performance on a stock 130XE. It's against my nature to waste RAM, though. ;)

 

The demo is chugging along fine, since it's not yet using extended banks and the object requirements are low. I'm just about to test de-allocation (by implementing window destructors), and at that point the mechanics are all working. We then move the code to a banked cart, open up the banking window at $4000-$7FFF... and then figure out this bloody allocation strategy. ;)

 

...how did you decide to implement a software-based protected mode and cooperative multitasking on the lovely (no, I didn't say lowly) 6502 of ours (since a hardware mode and preemptive multitasking won't be available), to protect against my ill-behaved apps which will be specifically designed for your GUI? I won't even mention the thousands of existing Atari software titles that we love to use which will take over the Atari.

 

Or would you rather have me start a conversation in its proper thread here, where you talk about "garbage collection"? I am afraid there would be no garbage to collect when an app takes over the Atari... ahhhh, I really missed the Windows 1.0 days... ;) ....

 

We have an MMU on the Atari, and if we keep each GUI application in a different extended bank (and each application's data also in different banks), we have some very limited kind of protection, as long as the system manages the bank switching. I don't doubt there's plenty of scope for disaster, though: I'll leave it up to intrepid developers to deal with memory leaks and such like. If your GUI apps are ill-behaved, you'd better take them in hand, my friend. ;)

 

With 1MB upgrades common, it seemed a shame not to have task-switching at the very least, and cooperative multi-tasking is a relatively short step. It's really down to organisation, and that's what I'm trying to thrash out now.

 

As for legacy applications (i.e. existing non-GUI software) - this won't be task-switched or multi-tasked at all. I'm solely interested in being able to run a GUI text editor with the desktop file browser open at the same time, etc.

 

Anyway: I took heart from the Mac's original cooperative multi-tasking implementation when I studied it. I love the way the desktop clock stops dead while you're gripping a scroll-bar handle some place else. ;) This kind of multi-tasking I can really get my head around... :D


Well, I suggest you compromise all in favor of simplicity and speed for the GUI itself and let the app writers have the short end of the stick :) The programmers here LIKE really hard stuff anyways.

 

For instance :

 

The GUI gets all non-banked memory, except for maybe a 128 or so byte personal scratchpad that all apps get...many apps could get by with just that I bet. For larger requests, banked memory is used. When that comes into play, in my experience you'll have to be marshalling pages back and forth anyway, so you might as well be in page chunks. You can't just flip the bank in and use it because you'll likely be flipping needed memory locations ( perhaps even the currently executing code ) out...unless you've taken pains to exclude stuff from that area, but that gets kind of hard. One memory request would result in a pointer to one page. If the app needs more than one page, it requests more than one time, and it has to keep track of its own pointers. An app would call 'load' on the pointer to move that page from banked to the apps' local non-banked page buffer, then do its thing, and then call 'store' to write it back or 'load' again to get the next one.

 

I guess if you could get away with leaving a big 16k hole in the middle of your footprint, you could assign an entire bank to an app, code and all. Then you just flip banks during your context switch. You could still use the bank marshalling idea above as well to provide even more ( but harder to use ) memory for an app. That 16k hole that gets swapped out could be GUI stuff and code, but you'd have to keep only stuff in there that can be safely swapped out.

 

well, anyways you've already covered this ground I'm sure, so I'll just leave my suggestion that you make it easy and fast for the GUI over the app writers.

Edited by danwinslow

Yep - I've been planning on a 16KB hole in the middle of the footprint since day one. Apps can take a whole bank, or use some of it for data, and they can reserve other banks too for code and static data. They can dynamically allocate memory from the heap, and shuffle it in and out of the "fast" unbanked area using calls to the GUI (applications will - naturally - never "see" the heap directly, since it's in the same physical space). This has seemed for some time to be the most sensible way to do it.

 

The difficulties of organizing heap banks are kind of another issue, once you get over the fact that the banking window is where application code goes. I guess the actual heap implementation is less important to get right at this stage than the fundamentals of how we're going to delegate extended bank use. Having already swapped out the memory manager once for a totally different implementation, I can do the same later on if we come up with a fancy bitmap indexed / paged system, but fundamental changes to how the object tree, etc, are organized will be more difficult to action further down the line.

 

Anyway: the page marshalling idea is already implemented for OB_SPEC data, so the application can get at it. The structure is loaded from heap memory to a transient "safe" buffer, amended, then written out again. The GUI, meanwhile, can access this data directly, but via wrapper code so that we still avoid a rat's nest of PORTB stores.

 

Good news: the author of the VERY impressive SymbOS (for Z80 machines) appears to have put in an appearance in the comments for one of the window manager videos, so hopefully I'll be able to get an insight into how he managed things on another 8-bit CPU. :)

Edited by flashjazzcat

...how did you decide to implement a software-based protected mode and cooperative multitasking on the lovely (no, I didn't say lowly) 6502 of ours (since a hardware mode and preemptive multitasking won't be available), to protect against my ill-behaved apps which will be specifically designed for your GUI? I won't even mention the thousands of existing Atari software titles that we love to use which will take over the Atari.

 

All the jokes and teasing aside, I think your biggest handicap will be the "Atari Coding Culture" in a cooperative multitasking environment. Assuming you overcome all the technical difficulties involved, you still have to educate/convince programmers to code within your guidelines, which in my opinion will be a monumental task. A multitasking environment brings a lot of limitations for the programmer: he can no longer take control of the computer, he must code within a well-defined perimeter, he can't hog the CPU, he can't use the memory as he wishes, he cannot access hardware directly, he must learn to share resources; in short, he must learn to compromise in many respects. But then there is a whole set of considerations around the application/game's performance, and in a system like the Atari 8-bit, every available resource is used to the max to get the best possible, or sometimes even just acceptable, performance. On a computer system with multiple-core CPUs and lightning-fast hardware, compromising and sharing is relatively easy; on a 6502-based computer, however, it's a whole different game, which I think not many programmers may be ready to play... And you have no choice but to get them to play along. That is, if you want your GUI to be a viable/usable operating environment and not just a demo which shows that it can technically be done.....

Edited by atari8warez

I think your biggest handicap will be the "Atari Coding Culture" in a cooperative multitasking environment. Assuming you overcome all the technical difficulties involved, you still have to educate/convince programmers to code within your guidelines, which in my opinion will be a monumental task. A multitasking environment brings a lot of limitations for the programmer: he can no longer take control of the computer, he must code within a well-defined perimeter, he can't hog the CPU, he can't use the memory as he wishes, he cannot access hardware directly, he must learn to share resources; in short, he must learn to compromise in many respects.

 

It's certainly true that it will be a culture shift, but then surely this is the whole point - not to mention where all the fun is. Anyone who's used to coding for GEM or Windows (and I think we have a few of them here) will be used to hardware abstraction, but agreed: it's something completely new for most A8 coders. This is why it's vitally important to have a rich API, and I take with a pinch of salt the idea that the lion's share of the headaches should be left in the hands of the programmer. I really want to make the creation of GUI applications relatively intuitive, and I want the developer to feel they can fully express themselves with the available control set. So the unavailability of direct hardware access should not feel like a hindrance... more a means of easing development. You want a text editor: stick a multi-line text control in a window. You want to load files: call the file selector. You want a toolbar: use the toolbar control.

 

Thinking about an application like The Last Word, which is some 30KB in size: how much of this code is devoted to the file manager, the editor engine, the user interface? The answer is A LOT. Replace the file manager with a call to the file selector, the editor engine with a text editor control, and the user interface with dialog boxes and alerts, and (although I'm vague about the exact figures) I think the application will come in at under 16KB, if written in pure assembler. This brings me to another point: I think CC65 support is massively important, since how else will we tempt ST coders to develop for a machine whose assembly language might be totally unfamiliar to them?

 

I can only speak for myself, as the person who wrote the demo application (not to mention the GUI itself): I do NOT come from a Windows coding background, and yet I'm finding that coding within a strict, hardware abstracted framework on the A8 is a LOT of fun and a welcome change.

 

But then there is a whole set of considerations around the application/game's performance, and in a system like the Atari 8-bit, every available resource is used to the max to get the best possible, or sometimes even just acceptable, performance. On a computer system with multiple-core CPUs and lightning-fast hardware, compromising and sharing is relatively easy; on a 6502-based computer, however, it's a whole different game, which I think not many programmers may be ready to play... And you have no choice but to get them to play along. That is, if you want your GUI to be a viable/usable operating environment and not just a demo which shows that it can be done.....

 

Well, GUI applications are waiting for events most of the time, and if an application has no events waiting for it, we can shoot off and process something else for a short while. Like what? A full-screen video player? I think not. How about a desktop clock or a timer event? Really, the multi-tasking I have in mind might as well be simple task-switching, but it's such a short step to also process background tasks, I think we might as well go for it. But this is why I threw out ideas of pre-emptive tasking. Sure, it's disconcerting that - say - when you drag a slider in Mac System 7, the desktop clock stops dead until you release the mouse button... but you can also see why. That foreground task has 100 per cent of the CPU time. Is this limiting? Sure it is. Is it "true" multitasking as we know it today? Certainly not. Will something like this do for a 6502 machine, when most of the time all you want to do is switch the focus from one application to another? I think it will.

 

We talked before in these pages about the dreaded "demo" effect, and this applies to plenty of GUIs which have gone before. I covered in my blog a number of "desktop metaphors" for the Atari, just about all of which are no more than curiosities. I struggle to believe that more than a couple of die-hard fanatics are using them for daily productivity purposes on their A8. We know this because they are rarely mentioned on the forums, other than in terms of periodic questions like "Was there ever a GUI for the Atari 8-bit?". Helpful folks will list Diamond, ATOS, etc, etc.

 

Of course, almost every GUI has suffered either from a total lack of an API, or an API which was uninspiring and severely limited. Couple these issues with poor performance, poor documentation, and you have a nice demo on your hands. Even though we're already doing things GEOS did not do on the C64, GEOS remains the absolute benchmark for any 6502 GUI. GEOS's API, documentation, presentation, and bundled / third-party applications are without equal. I see nothing even close to it on the A8, frankly, when judged as a complete package. Read the C64 GEOS developer documentation, and you can see why so many third-party GEOS applications were written.

 

My job is to develop the framework and core applications, and ensure that the system can run programs responsively. Personally - as someone who's coded in 6502 using direct hardware access on the A8 99 per cent of his coding life - I can't wait to start developing for this thing. Who knows if other A8 coders are gonna "play along"? The community in general - via nearly 90,000 topic views - would certainly flatter to deceive if they don't want to play along. How do I get them to play along? By writing some applications which show that this is a dynamic and responsive system with rich development opportunities, and providing excellent documentation. I can do no more than that.

 

So yeah: what you're saying is probably true, but where does the conversation go from here?

Edited by flashjazzcat

I think you will have plenty of people ready and willing to embrace the limitations. Sure, the zen assembler monks here may not be very interested, but I think you will have a ton of people who are. Myself of course, for one. I would like to help with the application API itself, actually.

Edited by danwinslow

 

So yeah: what you're saying is probably true, but where does the conversation go from here?

 

Well, maybe you can tell me if you have actually discussed this with potential developers to see what they think and how willing they are to contribute beyond an enthusiastic "Wow". In other words, do you have an idea about the size of your potential audience? What do you think of game programmers - will they embrace the thought of working within a GUI environment? If I were to program for your GUI I would probably think about apps, but may have second thoughts about games....


Well, maybe you can tell me if you have actually discussed this with potential developers to see what they think and how willing they are to contribute beyond an enthusiastic "Wow".

 

We've seen interest from potential developers (of which Dan is one) in this very thread. However, none of them have signed contracts so I can only take the general enthusiasm at face value.

 

In other words, do you have an idea about the size of your potential audience? What do you think of game programmers - will they embrace the thought of working within a GUI environment? If I were to program for your GUI I would probably think about apps, but may have second thoughts about games....

 

I think the user audience is large (again - I can only judge based on interest shown here). As for development: I didn't envisage it as a game platform (it's black and white, for a start), although a few people apparently want to write games for it. I would imagine, though, that in the lion's share of instances, games would be better written stand-alone and launched from the GUI as legacy apps. Nevertheless, the old mono Mac had a fair few games for it. Nothing stopping you going full screen in a custom graphics mode if you want, and just using the UI for the high-score table.

 

Really: the size of the potential audience is academic at the end of the day. I'd like to encourage development, but this isn't a commercial enterprise, and no-one is forcing me to write it. It is what it is... or what it will be.


Look at the complexity afforded to us by SDX. Coders are coping with that quite nicely - I don't see this as an issue.

 

I was more concerned with the limitations of a GUI from a game programmer's perspective than the complexity. When I say game, I don't mean the A8 version of Reversi, of course, but games with intense graphics/memory and CPU requirements.


Really: the size of the potential audience is academic at the end of the day. I'd like to encourage development, but this isn't a commercial enterprise, and no-one is forcing me to write it. It is what it is... or what it will be.

 

Of course no one does, and of course it's not commercial; the reason I asked these questions is to understand what motivated you to undertake such a long/complex project. I for one would like my software to be put to good use; what motivates me (whether it be a software or hardware project) is to see it used and to generate some productivity. This may be because I spent 30+ years programming in commercial environments.

 

For example:

When I was in my 20s (that was in the late 70s), I was running my own consulting business and had a few clients. I remember one of them, a pharmaceutical company, was outsourcing their data processing work to IBM as they didn't have computers in-house. They were entering their data using an IBM 3741 Programmable Workstation and sending the diskettes to IBM for processing. I (and my business partner at the time) came up with the idea of writing the necessary programs on this data entry station (as it was programmable) to process the data in-house instead of sending it to IBM. So I started learning the programming language of that machine (ACL), which only had 8KB of RAM, two diskette drives and a 4-line green display; I don't even remember what kind of processor it had, but just imagine a data entry station in the 70s. We spent months developing all the necessary business apps on that measly "computer" and started running the company's business. We also had a matrix printer for paper output. I remember there were many occasions I had to use the screen buffer to store some of the program code or data because I was running out of memory, and it was hilarious to see data flying by on that 4-line green screen... but in the end we did it, and they used this system for a few years until they acquired an IBM S/34. So to make a long story short, I was very satisfied to see the end result of my hard work, and that's what motivated me to spend countless hours in front of that workstation, more so than the money I was making from the consulting work. Achieving something difficult and seeing it put to some good use.

My next satisfaction was building a whole IT department from the ground up a few years later, including acquiring all the needed hardware/staff and writing/implementing every application for a distributed on-line system.

 

My first involvement with an Atari was in the mid 80s, and even though I loved the machine, I was never as motivated to develop for it, mainly because I was too busy working, and also because its home/game computer image made me believe that it wouldn't be taken seriously.

 

These days, as I no longer work to make money, my interest in Ataris has been rekindled, and whatever I am doing software-wise I am doing mostly for self-enjoyment and to supplement the hardware that I am building.

 

So what's your short story Jon?....

 

EDIT: Actually reading the very first post of this thread gave me an idea about what motivated you but feel free to add to that if you wish :)

Edited by atari8warez

I was more concerned with the limitations of a GUI from a game programmer's perspective than the complexity. When I say game, I don't mean the A8 version of Reversi, of course, but games with intense graphics/memory and CPU requirements.

 

Seriously, who's going to want to write a graphically intensive, full-colour game using a black and white GUI framework? It's obvious that they won't. You might as well ask how you're going to encourage programmers to write complex video editing applications using DOS 6.0. You're taking the single most extreme example of something the GUI will suck at (running a full colour game) and asking me how I'll encourage programmers to deal with that. The answer is I won't, because I don't care. This perceived problem is a complete non-issue.

 

Really: the size of the potential audience is academic at the end of the day. I'd like to encourage development, but this isn't a commercial enterprise, and no-one is forcing me to write it. It is what it is... or what it will be.

 

Of course no one does, and of course it's not commercial; the reason I asked these questions is to understand what motivated you to undertake such a long/complex project. I for one would like my software to be put to good use; what motivates me (whether it be a software or hardware project) is to see it used and to generate some productivity. This may be because I spent 30+ years programming in commercial environments.

 

What motivates me at the moment is seeing a GUI take shape on the machine. It's a creative endeavour. I also believe it will be put to good use if it's good enough. So I'm also motivated to make it extremely good. I enjoy writing it... there are interesting intellectual challenges at every turn...

 

Why do I play the guitar although I'm not in a band? Why did I spend hours on end drawing portraits when I was younger?

 

So what's your short story Jon?....

 

EDIT: Actually reading the very first post of this thread gave me an idea about what motivated you but feel free to add to that if you wish :)

 

"Why is there no 'GEOS' for the A8? Can it be done - or perhaps something better? What challenges must we overcome? How do GUIs work on other systems? What is the history of the GUI? What are right-threaded binary trees? What data structures should we use? How does multi-tasking work? What can we learn from implementing a GUI on the Atari 8-bit? What will it look like? What kind of programs shall we write for it? What kind of programs will other people write for it? Will it in any way make the machine more appealing / accessible to the younger generation?"

 

All questions I've asked myself, or which have been posed on these pages. Finding the answers is a fascinating journey and a great learning process. I don't know what further motivation one needs.


LOL!!!

 

It's art. One does it because it's gratifying and beautiful. To me, the idea of a reasonable GUI on a machine with serious limits is just ballsy as hell. Besides, the skills needed to make it actually work well are applicable all over the place in embedded land. Worth it on that basis alone. Had I the skill, I would totally enjoy that kind of challenge, and it would pay off in other ways too.


All questions I've asked myself, or which have been posed on these pages. Finding the answers is a fascinating journey and a great learning process. I don't know what further motivation one needs.

 

 

I think that depends on the person, as I tried to explain earlier, but I understand your point of view and that's fine. I wish you all the success and will be waiting to see your results. When you publish your APIs I may well be interested in contributing in my own small way. Good luck and have fun...

Edited by atari8warez

LOL!!!

 

Had I the skill, I would totally enjoy that kind of challenge, and it would pay off in other ways too.

 

 

Yeah, when one lacks the skills, the idea is "oh so fascinating"... When you have the skills, other considerations will also play a role in deciding how and where to use them. Thus my questions to FJC....


Delighted to report that Jörn Mika - the author of SymbOS - approached me by email the other day, expressing interest in the Atari 8-bit GUI after seeing some of the YouTube videos and reading about the project in an article in the German retro-computing magazine "Return" (I haven't seen the article).

 

SymbOS is an incredibly impressive graphical OS for Z80 platforms which includes all kinds of wonderful things like pre-emptive multi-tasking and FAT32 support. Jörn has apparently been out of the 8-bit scene for a few years, but is now back and catching up on the latest developments. I've run SymbOS in emulation before, and the presentation and documentation quality is right up there with GEOS (and the system itself is much more powerful). It soon becomes apparent that Jörn really knows what he's doing and I think his insights will be a valuable aid to the success of the A8 project.

 

Jörn's GUI uses paged memory allocation (and bitmap fields), and doing some kind of pool-based sub-allocation on 256 byte blocks seems quite a sensible proposition. We could do some kind of "slab" allocation, my take on this being that the first object allocation of a given class would cause an entire page of memory to be set up as an array for that particular object class. We could do linear searches of the arrays for de-allocated slots, and release the blocks once the arrays were empty. Bulk de-allocation of small objects would be very efficient (just release all the blocks occupied by the linked list of objects), and if we include a process owner ID in each block, the system can get rid of any "stranded" blocks when an application quits.

 

Anyway - it seems to be research, research and more research at the moment. :)

Edited by flashjazzcat

Re: skill

 

Yeah, I'm not there on programming, though I have written some nice programs.

 

However, I have other skills, and have done projects like this because I thought it was cool.

 

I was entirely serious about it being a form of art, and that is worth doing for its own reasons.


I was entirely serious about it being a form of art, and that is worth doing for its own reasons.

 

I'm entirely serious about agreeing with you. I tried to study core sciences after leaving school, and they weren't for me (I had some crazy idea about wanting to be an architect). I ended up focusing on music, English literature, computing, and the visual arts, since that's what I loved. And in some bizarre way, they are all linked. The design, aesthetic, human interaction, problem-solving and compositional aspects of coding make - for me - the fundamental elements of maths and logic into artistic and creative pursuits. There's a reason why there's a book called "The Art of Computer Programming". :)
