
Using a Commodore 64 to teach kids


jbanes


This blog post suggests that the Commodore 64 is a superior machine for teaching children, especially when it comes to programming, as kids apparently get lost in modern development languages and environments.

 

Were the vintage machines better, or do we just have fond memories that we want to pass on to our kids? I've heard some people suggest that Python can take the place of old-style BASIC, as it has an immediate mode interpreter. Others suggest that structured programming in Logo is the way to go. What are your thoughts?


From experience, I'd say there are no bad programming languages, just bad teachers. Even with a remarkably easy-to-learn language, an incompetent teacher or badly-written documentation will totally ruin the learning experience, and negatively taint the student's perception of said programming language. Likewise, a good teacher can get anyone interested in coding (even in machine language!) if the material is communicated in a structured and stimulating way. Whether this knowledge sticks in the long term is another matter, but that's true of any learning experience.

 

Also, I believe using old computers to teach programming is not really a good idea, even if such a strategy can produce positive results. If programming is too complicated for kids on today's computers, it's simply because 1) the software industry has forgotten about kids, and caters more to teenage students and adults, towards a professional approach to programming, and 2) there are software products out there that try to cater to younger kids, but they're lost in a sea of products that are badly served by fragmented distribution networks where the products never manage to reach the majority of the intended audience. In other words, you really have to look hard to find something that will fit kids' basic curiosity about computer programming, and hold their interest.

 

The appeal of the C64 lies in its almost toy-like appearance, which is reassuring for kids who seek to be introduced to programming on a small scale. However, good ol' Basic just isn't the way to go anymore. Visual Basic is better on some levels, but it's too advanced for most kids. And then there are applications like Game Maker that try to maintain an easy learning curve based on object-oriented programming, but quickly become oversaturated with "requested features" from those who adopt it and use it on a regular basis (and these users are usually enthusiastic teenagers and young adults, not kids). There's definitely an untapped market out there, which used to be handled by the home computers of the eighties, and it needs to be recaptured, so that young kids can catch the programming bug (pun intended). :)


I agree with you on the teaching part. I didn't have a good experience when I wanted to learn C++ in high school. The teacher was probably teaching C++ to the 85% or so of the class who had already taken, and passed, Pascal. It was really hard to understand what the teacher was talking about since I had no formal introduction to any sort of programming language. I failed the class, which was the only class I failed in K-12.

Yeah so I agree with you.


Yeah, I took C++ from a teacher who had been teaching Pascal all his life. He was a great teacher with Pascal but unfortunately didn't know too much about C++ and it made it a little hard for me and the rest of the class.

 

I'd say start kids off with whatever is hot at the time. I picked up Basic on my own with no teacher, as it was one of the only languages that was readily available.


This spring I taught BASIC programming on Commodore 64s to 8 kids from my homeschooling group. I blogged a bit about it here and here. It was discussed at some length over at Lemon64 too.

 

I'll shamelessly copy & paste my response from over at Lemon64 about why I didn't e.g. teach the kids Java instead:

 

1. I don't know Java :) Though I'm being paid for C++ code at the moment, so that's a possibility...

 

2. I do own exactly 4 Java-capable machines, but they're quite needed at home - no way I'm hauling those back and forth to the classroom every week! And I'm not about to start asking parents to haul their computers every week either. I've got C64s to spare, and can leave them set up full-time in the classroom.

 

3. The 40 column blue screen, the full-screen editor, the PETSCII graphics and 16 easily choosable colours are absolute magic for kids. I get them to draw a big smiley face as one of the first exercises, and they love it.

 

4. Immediate mode - being able to ?5+5 and get the feedback right away is brilliant. Then the idea of putting step numbers (line numbers) in front of commands makes a whole lot of sense to them - you're telling the computer you're making a list of instructions for it to RUN later.

 

5. Age appropriate/accessible - maybe I'm hanging around the wrong crowds, but I haven't met any 10 year olds teaching themselves Java. However, I knew dozens of kids teaching themselves BASIC back in '84. Now I realize there's a whole bunch of variables involved here, but I think just the general accessibility (in many senses of the word) of C= BASIC is a factor.

 

6. Principles - I'm interested in teaching general principles that can be applied. Things like IF/THEN, FOR/NEXT, variables and so forth can be applied to any language. It's probably 15 years before many of these kids will be looking for a real job - is teaching Java really going to be that much more useful to them that long from now? Besides, I'm not opposed to them learning more languages in the future.

 

7. Machine architecture - being an assembly programmer, I appreciate being "close to the metal" and there's no way the students are going to get a sense of that in Java. On the C-64, they're just a POKE away from a whole lot of experimenting that can be very educational.

 

8. Appreciating what goes on "behind the scenes". Someone who has never programmed in a procedural language won't appreciate what's going on when they're OOPing away. As a result, they could be creating extremely inefficient code (which still matters in some fields, like game programming).

 

9. I like the C64 an awful lot, and I'm happy to give it a bit more exposure :)
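For what it's worth, points 4 and 6 translate almost directly into Python, which the opening post mentioned as a possible modern stand-in for BASIC. A rough sketch (not what I actually teach, just the FOR/NEXT and IF/THEN ideas in modern dress):

```python
# C= BASIC: 10 FOR I = 1 TO 5 : T = T + I : NEXT
total = 0
for i in range(1, 6):      # FOR I = 1 TO 5 ... NEXT
    total = total + i

# C= BASIC: 20 IF T > 10 THEN B$ = "BIG"
if total > 10:             # IF ... THEN
    label = "BIG"
else:
    label = "SMALL"

print(total, label)
```

And Python's interactive prompt gives much the same ?5+5 immediacy: type 5 + 5 at the >>> prompt and the answer comes straight back.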


In defense of the C-64, it's a simple enough machine that you can understand the hardware thoroughly. The modern languages are so abstracted that programmers, ironically, often know little or nothing about hardware. I've complained to my supervisors more than once that I'm tired of explaining to programmers how computers work.

 

The downside to programming a C-64? Well, I was an ace C-64 programmer in my day, but all that object-oriented stuff messes me up. Could be I never had a good teacher. But what the 64 taught me about how computers work served me well. Once I knew how to fix a 64, I got a couple of broken PC/XTs for free or cheap, figured out how to fix those, and not long after that, I had a career.


I'll shamelessly copy & paste my response from over at Lemon64 about why I didn't e.g. teach the kids Java instead:

...

Teach kids Java???? Perish the thought man!!! :-o

 

I'm 34 years old, I'm just starting out with Java at the office, and I don't think I'll ever fall in love with it. Of course, I've always had issues with OOP to begin with, and Java doesn't begin to solve these issues, so I can't say I'm an unbiased observer. Still, you have to admit, having "kids" and "Java" in the same sentence is beyond strange... :D


I think teaching kids anything about computers is an excellent proposition. I also agree that the C= 64, and BASIC, are amenable to younger children's learning.

 

The only drawback is that the knowledge is less likely to draw the kids into a love of computing, because the opportunities to apply that knowledge are limited outside the classroom. Back in the 80's when young kids were teaching themselves BASIC it was possible to get exposure to new ideas from others in the community (i.e. friends, magazines, etc.). It's part of what made me want to learn it so much when I was a kid, the ability to show it to my friends and to see what they were doing. You just can't do that today with BASIC (or at least, not many 10 year olds can).

 

Still, as was pointed out above, the basic premises do carry over. Hopefully, the spark of enjoyment will cause the youngsters to continue their programming education.

 

Good for you, MacbthPSW. :)


PixelBoy: I'm not really sure I understand your argument. You're basically saying that programming on old computers is bad because they're old? I'm not sure that your logic follows. A large majority of today's exceptional programmers grew up programming BASIC on these computers. As Macbeth said, the principles are the same, and you have such utter control over the machine that it makes children feel powerful.

 

As far as I'm concerned, positive results are positive results. You're not going to damage the child by starting them with something they can understand young, so why not? :)

 

 

Macbeth: That's some great stuff! People wonder why homeschoolers do so much better academically; well here's the reason. You just can't get that kind of loving attention in public school. I especially love the kid who memorized half the manual. That was me as a kid. :lol:

 

If I understood you right, you worked on the C64DTV with Jeri, right? What do you think of the idea of producing an educational C64? I mean, you're obviously having good luck using it with homeschoolers, so is there any reason why the concept couldn't be productized as the article suggested?

 

 

Everyone else: I don't really understand the aversion to OOP coding. It's pretty easy to wrap your head around once you've been exposed to C. Take a simple lightbulb control program:

 

#define ON  1
#define OFF 0

unsigned int address;   /* hardware port address of the bulb */
char state = OFF;

char get_light_bulb_state()
{
	return state;
}

void set_light_bulb_state(char newstate)
{
	outp(address, newstate);   /* outp() is a DOS-era port write; platform-specific */
	state = newstate;
}

 

A pretty simple and elegant program, right? (Forgive me if there are any errors. My C is a little rusty.) Now what if you had 50 lightbulbs attached that you needed to control? Well, you could run 50 different copies of the program, or you could do something like this:

 

#define ON  1
#define OFF 0

unsigned int address[50];
char state[50] = {OFF};   /* remaining elements default to 0, i.e. OFF */

char get_light_bulb_state(int bulb)
{
	return state[bulb];
}

void set_light_bulb_state(int bulb, char newstate)
{
	outp(address[bulb], newstate);
	state[bulb] = newstate;
}

 

That works, but it kind of destroys the elegance of our program. It's also much harder to handle an arbitrary number of bulbs. We'd need to develop more complex data structures to handle things in a truly arbitrary fashion. So instead, let's think about the idea of running 50 separate instances of the program. The only real reason why we can't do that inside the program is that we have global variables. Any change to address or state would clobber the value for the other bulbs. But what if the variables could be global for the parts that needed them, but hidden for everything else?

 

Here's the same program, Java OOP style:

 

public class LightBulb
{
	//These are final, so they're like constants. They can't be modified.
	public static final int ON  = 1;
	public static final int OFF = 0;

	public int address;
	public int state = OFF;

	public int getState()
	{
		return state;
	}

	public void setState(int newstate)
	{
		outp(address, newstate);   // stand-in for whatever hardware write the platform provides
		state = newstate;
	}
}

 

You'll note that the program is the same as before, except that it's all compacted into an object known as a "class". That allows us to push the array up a level, like this:

 

LightBulb[] bulb = new LightBulb[50]; //Create an array with 50 slots

for(int i=0; i<50; i++)
{
	bulb[i] = new LightBulb(); //Create a copy of the lightbulb program
	bulb[i].address = /* whatever the source for the port address is */;
}

//Turn bulb 7 on
bulb[7].setState(LightBulb.ON);

 

See how tidy that is? Much better than having to make arrays out of every little variable. But wait, it gets better! What if you don't want junior "I'm God's gift to programming, even though I don't know what the F*** I'm doing" programmer over there mucking around with the lightbulb addresses at runtime? He might introduce a bug that would be really hard to find. Or worse, he might manually change the state variable, setting it out of sync with the lightbulb! That would be a pain in the rear to debug.

 

Well, then you can use constructors. Constructors are just init() routines that set up all the pseudo-global data that you don't want external code to modify. For example:

 

public class LightBulb
{
	public static final int ON  = 1;
	public static final int OFF = 0;

	//No one can access these outside of the code inside this class.
	private int address;
	private int state = OFF;

	public LightBulb(int defaultaddress)
	{
		address = defaultaddress;
	}

	public int getState()
	{
		return state;
	}

	public void setState(int newstate)
	{
		outp(address, newstate);
		state = newstate;
	}
}

 

Now we can save ourselves a line of code:

 

LightBulb[] bulb = new LightBulb[50]; //Create an array with 50 slots

for(int i=0; i<50; i++)
{
	bulb[i] = new LightBulb(/* whatever the source for the port address is */);
}

//Turn bulb 7 on
bulb[7].setState(LightBulb.ON);

 

Easy! :)


The only drawback is that the knowledge is less likely to draw the kids into a love of computing, because the opportunities to apply that knowledge are limited outside the classroom. Back in the 80's when young kids were teaching themselves BASIC it was possible to get exposure to new ideas from others in the community (i.e. friends, magazines, etc.). It's part of what made me want to learn it so much when I was a kid, the ability to show it to my friends and to see what they were doing. You just can't do that today with BASIC (or at least, not many 10 year olds can).

I agree. It'll probably remain a dream, but I sometimes think how cool it would be to get many of the families in our homeschooling community set up with a C-64 rig from my collection, and then get the kids swapping disks with programs they write and stuff :) I could even make a little type-in program for our monthly newsletter...

 

More realistically, perhaps some PC/Mac based environment (like an 8-bit emulator, but less esoteric) could be developed with this sharing/community idea built-in. Or maybe something like this exists already, but I haven't found it yet?


If I understood you right, you worked on the C64DTV with Jeri, right?

Yeah, I helped with testing during the hardware development (*not* during production :)) and was the lead programmer in the software team.

 

What do you think of the idea of producing an educational C64? I mean, you're obviously having good luck using it with homeschoolers, so is there any reason why the concept couldn't be productized as the article suggested?

Yeah, Jeri and I discussed this at length back in 2004, and we both really liked the idea. I was thinking about its potential as a tool to teach programming; she was thinking of its potential to teach computer hardware. Both sides are lacking a cheap, versatile platform. I suppose it's still possible if the right investor comes along :)

 

In the meantime I've been thinking about a C-64 cartridge that could add all the features necessary to turn existing C-64s into perfect learning tools - maybe a few languages in ROM, built-in debugger, and some sort of file storage system.


Everyone else: I don't really understand the aversion to OOP coding. It's pretty easy to wrap your head around once you've been exposed to C.

 

I agree, although my first formal introduction to modern programming actually did start with Java. I started it in my first semester of college, and our instructor took it for granted that most of us had never programmed on anything more than a C64 in our lives (and in fact, that was generally the case for everyone). So we took the proverbial "Low Road" to Java, and I actually got a good bit out of the language. The next semester I took up C++ and learned quite a bit about it as well. It made C bearable when I moved on into my senior years and got an instructor who apparently knew his stuff very well, but had a very, very difficult time explaining his code or his usage of variables (fpmvxc anyone? :? ).

 

Anyways, I think the C64 is one of many excellent base tools for kids to learn on. Of course, given the right amount of time and effort, a kid could also learn some basic C++ stuff too. But the bottom line is what was mentioned earlier... if they're 10, you can almost bank on them learning some advanced things in C64 BASIC quite easily (and maybe a little C++, too). They have plenty of time to decide to take an interest in programming, and if they do, once they head into college they'll pick up what they need to know about whatever language is popular at that time (might not even be C, C++, Java, and the like). That way they have a basic introduction now, and will get a more formal introduction later should they decide they need (or want) it. Works great. :)

 

If you did want to do a kids-based computer that allowed them to write programs in, say, C++ or something... it might be worth it for some of these educational companies to produce a "Kid's Programming Computer", which could plug directly into the TV and might include an embedded C++ compiler that operates on "push-button" commands on the keyboard. Code could be stored on flash cards, compiled by a single button, and run on the kid's computer, or even on a real desktop. The idea is to have a "kid-friendly" computer that, like the C64, can't be potentially destroyed if you run bad code, but that also gives kids an introduction to a more modern language. If the computer freezes up somehow, just power off and back on and you're all set and ready to roll again. In short, build a C64-esque system (with more memory), but have its native built-in language be a fully functional modern language.

Edited by rockman_x_2002

PixelBoy: I'm not really sure I understand your argument. You're basically saying that programming on old computers is bad because they're old? I'm not sure that your logic follows. A large majority of today's exceptional programmers grew up programming BASIC on these computers. As Macbeth said, the principles are the same, and you have such utter control over the machine that it makes children feel powerful.

Perhaps, but when they move up to today's PCs, they quickly realize that they are much more complex machines, and they can't "stay close to the metal". Then they realize that programming in Basic on a C64 is a universe apart from programming on today's computers, so all they learned on the C64 is actually counter-productive when they are put in contact with object-oriented languages.

 

As far as I'm concerned, positive results are positive results. You're not going to damage the child by starting them with something they can understand young, so why not? :)

You're assuming that whatever they learn will be used as foundation for learning more complex things down the road. This used to be true, but not anymore. Show 10-year-old kids how to program in Basic on a C64, and then introduce them to Visual Basic soon after. I'm fairly certain they won't like it as much, because the learning curve is higher, and what they learned previously only helps them a little bit. Teenagers are more likely to adjust to VB, but kids will just yawn their jaws off when they see how much work is involved in doing anything interesting in VB.

 

Everyone else: I don't really understand the adversion to OOP coding. It's pretty easy to wrap your head around once you've been exposed to C. Take a simple lightbulb control program:

 

... (quoted text removed for conciseness's sake) ...

 

Easy! :)

 

I've been eagerly wanting to reply to this, but I've been busy elsewhere until now. Presenting a simplistic example of OOP may be a nice selling point to most people, but it doesn't work for me at all, because it conveniently hides the actual problems of OOP:

 

1) First of all, there's the inherent complexity of object-based systems: Usually, individual object classes are well-designed and can be quite elegant by themselves (as your light bulb example demonstrates), but it's when you combine them with other classes to perform a particular task that problems arise. Real-world applications usually include a large group of objects (dozens, sometimes many more) and it's hard for any programmer to maintain a mental image of all the interactions that occur between objects and sub-objects, especially when some of these objects were coded by others and are not understood intimately. Most bugs under OOP crop up because the programmer fails to predict bizarre side-effects of interactions between objects. Also, adding a class to a project usually makes said project more complex. I've seen instances where adding a single object class to an application effectively doubled the internal complexity of that application. Procedural programming may have its faults and disadvantages when compared to OOP, but inherent complexity can usually be easily avoided with some well-designed code, while OOP languages force the programmer to cope with exponential complexity, which is never fun.

 

2) It's a well-known fact that the analysis phase of any OOP project needs to be much longer and more involved than for old-style programming projects. The problem is that, in real life, there's never enough time allocated to the analysis and design phase. This is a big problem with OOP, because if you design a crappy OO model for an application and fail to correct its flaws early on, you're pretty much stuck with a crappy application for the rest of your life, and everyone who sees your code is going to say 'this thing is so badly designed, it needs to be rewritten from scratch'! OOP specialists usually respond to this issue by saying that the development process of an OO application needs to be done by iterations, correcting and improving the OO model during each iteration, but that's a utopian suggestion. In real life, there's simply no time (and no desire) to reevaluate an OO model repeatedly. People just want to see something that works, and don't care if the application is a nightmare under the hood, so programmers usually take the direct route between A and B and will not question what's already been coded, unless it's absolutely necessary. The bottom line: Programs made with OO languages can be just as nightmarish to maintain as those done with procedural languages. And believe me, I speak from experience.

 

3) You can't do anything remotely complex with an OO language without the help of an IDE (like NetBeans for Java). When you have hundreds of methods spread out over dozens of objects, with inheritance and polymorphism thrown into the mix, you need tools to help you keep your head above water, and even then, it's something one needs to get used to. The problem is that such IDEs are loaded with tools, features and configuration options, which makes their learning curves very steep. And I'm just talking about the IDEs themselves, not the OO language, which usually has its own learning curve.

 

4) About Java specifically, it's no secret that it eats RAM for breakfast, dinner and supper too. It's a memory hog, and there's no way to control how Java allocates memory. The best you can hope for is that the garbage collector does its job in an efficient manner, which is actually something that has improved over the last few years, but the fact remains that programmers have pretty much no direct control over actual memory allocation.

 

You might ask yourself why OOP caught on so well despite these problems. The answer can be summed up with a simple acronym: GUI. Windows, buttons, menus, textboxes, etc. lend themselves naturally to object-oriented paradigms, and because of this, programmers will code a GUI-driven application using OO, and will naturally code the rest of that application using OO because it's basically the logical thing to do. And yet, this doesn't make OOP's inherent problems go away.
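To see that in miniature, here's a hypothetical sketch (the widget and its methods are invented, not from any real toolkit): a button practically begs to be an object, because its state and its behaviour belong together:

```python
class Button:
    # Each on-screen widget bundles its own state (label, click count)
    # with its own behaviour (what happens when it's clicked).
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click
        self.clicks = 0

    def click(self):
        self.clicks += 1
        self.on_click()

# Every button on a form is just another instance of the same class.
log = []
ok = Button("OK", lambda: log.append("saved"))
cancel = Button("Cancel", lambda: log.append("discarded"))
ok.click()
```

Each widget type maps onto a class and each widget on screen onto an instance, which is exactly the shape OO languages encourage.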

 

I have always believed that good programming is an art form, and that good programmers are artists at heart. Using OOP correctly (to make well-designed and reusable code) is an art form just the same. That's why having good teachers in computer science is crucial, now more than ever. OOP has never managed to truly impress me, despite its obvious advantages, because its flaws are too big to ignore.


Perhaps, but when they move up to today's PCs, they quickly realize that they are much more complex machines, and they can't "stay close to the metal". Then they realize that programming in Basic on a C64 is a universe apart from programming on today's computers, so all they learned on the C64 is actually counter-productive when they are put in contact with object-oriented languages.

Nonsense. You have to learn to walk before you can run. In the case of programming, "walking" means being able to put one instruction after another. I've taught programming to quite a few people in my day, and you know what one of the biggest beginner hurdles is? Realizing that one instruction flows into the next! It may seem stupidly simple to you and me, but complete beginners fail to understand why just having the command there doesn't work. (I remember making similar mistakes when I was a child. I misordered the IF statements in a slot machine program, and thus failed to catch the more massive payouts.)
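Here's a hypothetical sketch of that slot-machine mistake (in Python rather than the BASIC I actually wrote as a kid; the payout numbers are invented):

```python
# Buggy: the smallest payout is tested first, so control returns
# before the bigger payouts are ever checked.
def payout_buggy(matches):
    if matches >= 1:
        return 1
    if matches >= 2:
        return 10
    if matches >= 3:
        return 100
    return 0

# Fixed: test the biggest payout first, then fall through to the smaller ones.
def payout_fixed(matches):
    if matches >= 3:
        return 100
    if matches >= 2:
        return 10
    if matches >= 1:
        return 1
    return 0
```

The buggy version pays a jackpot (three matches) as if it were a single match, because the first IF swallows everything. Order of instructions matters, and that's the lesson a beginner has to internalize first.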

 

Computers have gone from simple to complex. As a result, one of the best teaching methods is to take a student from simple to complex. I usually prefer the following track:

 

Line Procedural Programming -> Function Procedural Programming -> Object Oriented Programming

 

Each step adds another layer of complexity, but only after the student is ready to absorb that complexity.
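As a sketch of the track (Python here purely for brevity; the toy task of summing the first few squares is invented for illustration), each stage reworks the one before it:

```python
# Stage 1: line procedural - one instruction after another, like numbered BASIC lines.
total = 0
total = total + 1 * 1
total = total + 2 * 2
total = total + 3 * 3

# Stage 2: function procedural - the same steps, factored into a reusable function.
def sum_of_squares(n):
    result = 0
    for i in range(1, n + 1):
        result += i * i
    return result

# Stage 3: object oriented - the running total becomes state bundled with behaviour.
class SquareSummer:
    def __init__(self):
        self.total = 0

    def add(self, i):
        self.total += i * i
        return self.total
```

Each stage computes the same thing; what changes is how much structure the student has to hold in their head at once.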

 

Show 10-year-old kids how to program in Basic on a C64, and then introduce them to Visual Basic soon after.

Now there's a good way to screw them up for life. We're supposed to be teaching children, not torturing them! :sad:

 

Visual BASIC is not real programming. You can accomplish real programming, but only after you wrestle with all the RAD tools so you can get some real work done. I've found that the VB "language" is best approached after one has a firm grasp on computer fundamentals. Only then can one see through the haze to understand what the VB runtime is doing. Otherwise, programmers just take shot after shot in the dark, trying to tweak the VB code to do what they need.

 

Honestly, I can't think of a single thing that has done as much damage to the computer industry as Visual BASIC has.

 

 

there's the inherent complexity of object-based systems: Usually, individual object classes are well-designed and can be quite elegant by themselves (as your light bulb example demonstrates), but it's when you combine them with other classes to perform a particular task that problems arise. Real-world applications usually include a large group of objects (dozens, sometimes many more) and it's hard for any programmer to maintain a mental image of all the interactions that occur between objects and sub-objects, especially when some of these objects were coded by others and are not understood intimately.

You're talking about bugs that occur in ALL coding, OOP or not. OOP was partly developed as a method of categorizing and organizing that information so that it's easier to keep track of. I know that I would much, much, much rather work on complex OOP code (presuming that the code was actually done in OOP style) than I would a massive procedural system. Or to put that into concrete examples, J2EE applications kick the stuffing out of managing SAP deployments. There are still people who wake up every night in a cold sweat, wondering if they didn't totally hose their SAP system without realizing it.

 

Most bugs under OOP crop up because the programmer fails to predict bizarre side-effects of interactions between ~~objects~~ APIs.

Fixed it for you.

 

Also, adding a class to a project usually makes said project more complex. I've seen instances where adding a single object class to an application effectively doubled the internal complexity of that application. Procedural programming may have its faults and disadvantages when compared to OOP, but inherent complexity can usually be easily avoided with some well-designed code, while OOP languages force the programmer to cope with exponential complexity, which is never fun.

I don't understand your point here. A single class cannot double the complexity unless the class is poorly designed. (Usually that means thousands upon thousands of procedural lines of code stuffed into a single class. Yuck.) There's also nothing inherently exponential about OOP design. It can be as flat, or as hierarchical, as you want it to be. I've seen some pretty bad OOP designs where the entire application is shattered into billions of tiny objects, with just one or two monster objects controlling everything, but then I've also seen millions of lines of spaghetti procedural code that's even worse to maintain. Both come back to poor code design, and have nothing to do with the abilities (or lack thereof) offered by OOP vs. procedural design.

 

It's a well-known fact that the analysis phase of any OOP project needs to be much longer and more involved than for old-style programming projects. The problem is that, in real life, there's never enough time allocated to the analysis and design phase.

Such is life. The same is actually true of procedural programming; it's just that maintenance on procedural code tends to require a massive scaling of programmers, where most OOP environments attempt to constrain the number of programmers to keep too many cooks from spoiling the broth. The best solution in both cases is to keep up on your code refactorings. If ugly code is refactored as needed, then a redesign/rewrite can be staved off considerably. Eventually you will reach a point, though, where the entire codebase needs a rewrite. For an example, I point you to Netscape -> Mozilla. Netscape Inc. just couldn't take the code any farther, so a ground-up rewrite was necessary. And Netscape was not OOP code.

 

This is a big problem with ~~OOP~~ coding, because if you design a crappy ~~OO model~~ codebase for an application and fail to correct its flaws early on, you're pretty much stuck with a crappy application for the rest of your life

Fixed it for you. And before you disagree, I have four words for you: Coldfusion and Stored Procedures

 

 

You can't do anything remotely complex with an OO language without the help of an IDE (like NetBeans for Java).

Utter nonsense. If you can't handle your code without an IDE, then you don't know what you're doing. IDEs can provide a lot of helpful information and tools, but they're nowhere near required. I like Netbeans, myself, but I still work on exceedingly complex projects with nothing more than JEdit, the JDK, and regular JavaDocings. In my experience, that last point is the key failure of programmers new to Java. They never take advantage of the auto-documentation features of the language, so they never get a good feel for what code exists. The FIRST thing you should do on any project is look up the JavaDocs for the codebase. If the system is still completely undecodable, then you probably have a lot of procedural code stuffed into a few classes. Have fun. :(

 

About Java specifically, it's no secret that it eats RAM for breakfast, dinner and supper too. It's a memory hog, and there's no way to control how Java allocates memory.

:roll: Java eats a base 20 megs for all the libraries it loads. Most of that gets swapped out to disk after the load. The remainder is as big as your program is. Something like JDiskReport will only show a few extra megs of usage, while something like Netbeans can balloon to well over a hundred megs. Similarly, something like Stella can show only a few megs of usage, while something like Firefox can balloon into several hundred megs.

 

That's just the nature of the beast. More code complexity == more memory eaten. Thankfully, the memory in machines has grown to accommodate increasing complexity. It wouldn't be much of a problem, in fact, if Windows didn't have such terrible memory management. The Windows VMM is still designed for machines with 16 megs of memory. As a result, it overcompensates by swapping data to disk. This creates problems for a garbage-collected program like Java, as Java's attempts to check references can cause the VMM to thrash. If you try the same code on OSes like Linux, FreeBSD, Solaris, or Mac OS X, the thrashing issues disappear.

 

 

You might ask yourself why OOP caught on so well despite these problems.

No, not really. Considering I was an early advocate of Java technology, I'm quite well aware of why it caught on. ;)

 

The answer can be summed up with a simple acronym: GUI. Windows, buttons, menus, textboxes, etc. lend themselves naturally to object-oriented paradigms, and because of this, programmers will code a GUI-driven application using OO, and will naturally code the rest of that application using OO because it's basically the logical thing to do. And yet, this doesn't make OOP's inherent problems go away.

Allow me to correct your history. 12-15 years ago (has it really been that long?) there was a huge debate in the industry. C++ was promising faster development times, easier-to-maintain code, fewer bugs per KLOC, etc, etc, etc. The C and C++ camps were split, with the former calling the latter "slow", "difficult to use", and "needlessly complex". As a result, most GUI systems were built in C, not C++. Wrappers (like MFC) were created to appease the OOP faithful. But the truth was that C++ didn't really carry the promise of OOP as far as it should have. That title belonged to projects like Smalltalk.

 

While C++ was mostly a preprocessor over the C language, Smalltalk was a ground-up effort with different memory management, a lack of primitives, code reflection, and other advanced features that OOP was supposed to buy. The C++ faithful were unaware of what they were supposed to have, though, so they continued championing a simple inversion of C structs. Those who did see Smalltalk wrote it off as an ivory-tower academic project.

 

Enter Java. Originally started as a project to allow individual C++ classes to be recompiled without recompiling the entire codebase, it began to evolve toward many of the features that Smalltalk had. Unlike Smalltalk, however, it struck a balance between the pragmatic (e.g. primitive types) and the ivory tower (e.g. everything is an object). That alone made it a nice language to work with. But what really made it beautiful was the near-absolute lack of language features. The language itself held practically no syntactic sugar, instead favoring a push of all features to the libraries. This design meant that learning the language was easy, and making use of advanced features became as easy as plugging into another library. The coup de grace of the platform was that it shipped with a standard API set far in advance of what C and C++ programmers were used to.

 

By 1996, web development was in full swing. Java had managed to find a niche in the form of embedded applets. (A good idea gone wrong, if I've ever seen one.) Developers learned Java in order to create a few of these applets, but quickly fell in love with the power of its APIs. It wasn't long before the Java movement began championing the platform for non-applet use in a variety of situations. Early attempts at GUI programming ultimately were unsuccessful. The platform simply wasn't mature enough. But Java did find a huge niche to exploit as web pages became more dynamic.

 

The CGI of the day was inefficient and horribly unscalable for the task of generating dynamic web pages. Coldfusion and ASP appeared to solve the efficiency problems, but they proved to be similarly unscalable. Java, on the other hand, could dynamically deploy individual classes to be part of a larger system, could handle networking on a whim, could force code to follow modular standards, cleanly handled program errors in a shared system, and could easily access databases where custom routines had previously been required (God help you if you were using SQL embedded in C). So with barely a handful of classes (known as "servlets"), the most incredible server-side technology known to man was born.
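That "dynamically deploy individual classes" trick boils down to loading a handler class by name at runtime, which is essentially how servlet engines map requests to code. Here's a rough sketch of the idea; the Handler interface and GreetingHandler class are invented for illustration, not real servlet API:

```java
// Sketch: a tiny "container" that loads a request handler purely by
// class name, the way servlet engines map URLs to servlet classes.
interface Handler {
    String handle(String request);
}

// A hypothetical handler that could be dropped in without
// recompiling the container.
class GreetingHandler implements Handler {
    public String handle(String request) {
        return "Hello, " + request + "!";
    }
}

public class TinyContainer {
    // Instantiate a handler from its name via reflection.
    static Handler load(String className) throws Exception {
        return (Handler) Class.forName(className)
                              .getDeclaredConstructor()
                              .newInstance();
    }

    public static void main(String[] args) throws Exception {
        Handler h = load("GreetingHandler");
        System.out.println(h.handle("world")); // prints "Hello, world!"
    }
}
```

Real containers add lifecycle management, pooling, and error isolation on top, but the core trick is just this reflective load.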

 

This post is already getting WAY too long, but suffice it to say that Java further put its advantages to good use in JSP pages, and eventually a complete J2EE framework. Java's design has meant that it has been on the leading edge of new technological developments while other languages/platforms struggle to keep up. Without Open Source working tirelessly on new libraries, C/C++/PHP programmers would be unable to keep pace. They'd still be purchasing APIs at exorbitant prices, or rolling their own semi-compatible solutions.

 

I think the prime reason why OOP seems so much more complex is that Java is being used to tackle extremely complex problems. You used to need an army of hundreds of programmers to develop a system as complex and powerful as today's J2EE applications. Thanks to the modularity of OOP, however, we can just plug in an API and go. This means that fewer developers are necessary for a project's completion, but it also means that they will feel the strain as maintenance issues set in. The world of computer science is still looking for methods to reduce the costs of maintenance, but progress is slow. So in the meantime, we have to do the best we can with the tools we have at hand.

 

 

I have always believed that good programming is an artform, and that good programmers are artists at heart. Using OOP correctly (to make well-designed and reusable code) is also an artform just the same. That's why having good teachers in computer science is crucial, now more than ever. OOP has never managed to truly impress me, despite its obvious advantages, because its flaws are too big to ignore.

 

This argument does not follow from your previous argument. If a child is to learn how to handle OOP correctly, then (s)he needs to learn about all the developments that led up to the OOP paradigm. Otherwise he'll have no frame of reference for understanding the issues he faces. As a bonus, he'll learn a lot more about what his machine is doing under the covers, so that he won't be surprised when something unexpected happens that can be easily explained by the way the hardware works. (Raise your hand if you were shocked when you first realized that you couldn't correctly compute money using floating point numbers!) :)
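For anyone who hasn't hit that floating-point money surprise yet, a minimal demonstration, with java.math.BigDecimal as the usual fix:

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // Add ten dimes using doubles: binary floats can't represent
        // 0.10 exactly, so the total drifts away from 1.00.
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.10;
        }
        System.out.println(total == 1.0); // false!
        System.out.println(total);        // 0.9999999999999999

        // BigDecimal keeps exact decimal digits, so the sum is exact.
        BigDecimal exact = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            exact = exact.add(new BigDecimal("0.10"));
        }
        System.out.println(exact.compareTo(BigDecimal.ONE) == 0); // true
    }
}
```

A student who has seen binary representation coming up from the metal won't be shocked by the first half of that output.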

Link to comment
Share on other sites

Perhaps, but when they move up to today's PCs, they quickly realize that they are much more complex machines, and they can't "stay close to the metal". Then they realize that programming in Basic on a C64 is a universe apart from programming on today's computers, so all they learned on the C64 is actually counter-productive when they are put in contact with object-oriented languages.

Nonsense. You have to learn to walk before you can run. In the case of programming, "walking" means being able to put one instruction after another. I've taught programming to quite a few people in my day, and you know what one of the biggest beginner mistakes is? Failing to realize that one instruction flows to the next! It may seem stupidly simple to you and me, but the programming illiterate don't understand why just having the command there doesn't work. (I remember making similar mistakes when I was a child. I misordered the IF statements in a slot machine program, and thus failed to catch the more massive payouts.)
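That slot-machine mistake is easy to reconstruct. A hypothetical payout routine (in Java rather than C64 BASIC, but the ordering bug is identical) where checking the small win first swallows the jackpot:

```java
public class SlotPayout {
    // Buggy version: the two-of-a-kind test runs first, so a jackpot
    // (three matches) is paid out as a small win and the jackpot
    // branch is never reached at runtime.
    static int buggyPayout(int matches) {
        if (matches >= 2) return 5;    // small win caught first...
        if (matches == 3) return 100;  // ...so this never fires
        return 0;
    }

    // Fixed version: test the most specific condition first.
    static int fixedPayout(int matches) {
        if (matches == 3) return 100;  // jackpot checked before
        if (matches >= 2) return 5;    // the small win
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(buggyPayout(3)); // 5 -- the missed payout
        System.out.println(fixedPayout(3)); // 100
    }
}
```

The lesson for the beginner is exactly the one above: instructions flow top to bottom, and the first matching branch wins.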

 

Computers have gone from simple to complex. As a result, one of the best teaching methods is to take a student from simple to complex. I usually prefer the following track:

 

Line Procedural Programming -> Function Procedural Programming -> Object Oriented Programming

 

Each step adds another layer of complexity, but only after the student is ready to absorb that complexity.
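The same toy problem at each of those three stages might look like this. It's a deliberately trivial sketch, with all three stages written in Java for side-by-side comparison, even though stage 1 would more naturally be BASIC:

```java
public class Progression {
    public static void main(String[] args) {
        // Stage 1: line procedural -- one statement after another.
        int a = 3;
        int b = 4;
        int sum1 = a + b;
        System.out.println(sum1);

        // Stage 2: function procedural -- the same logic, named
        // and reusable.
        int sum2 = add(3, 4);
        System.out.println(sum2);

        // Stage 3: object oriented -- state and behavior bundled
        // together in an object.
        Adder adder = new Adder(3);
        System.out.println(adder.plus(4));
    }

    static int add(int x, int y) {
        return x + y;
    }
}

class Adder {
    private final int base;
    Adder(int base) { this.base = base; }
    int plus(int y) { return base + y; }
}
```

Each stage solves the identical problem; what grows is the amount of scaffolding the student must understand before the first line runs.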

I guess I can agree with that kind of logic up to a certain point. My only question is this: At what point should we leave "line procedural programming" behind and have a different starting point for introducing computer programming to kids? I can't imagine kids still using C64s 100 years from now...

 

 

 

Show 10-year-old kids how to program in Basic on a C64, and then introduce them to Visual Basic soon after.

Now there's a good way to screw him up for life. We're supposed to be teaching children, not torturing them! :sad:

 

Visual BASIC is not real programming. You can accomplish real programming with it, but only after you wrestle your way past all the RAD tools to get some real work done. I've found that the VB "language" is best approached after one has a firm grasp of computer fundamentals. Only then can one see through the haze to understand what the VB runtime is doing. Otherwise, programmers just take shot after shot in the dark, trying to tweak the VB code to do what they need.

 

Honestly, I can't think of a single thing that has done as much damage to the computer industry as Visual BASIC has.

It depends on how you look at it. From a programming standpoint, it's true that VB doesn't have much going for it, but when it first came out, VB offered the novelty of building an interface with nothing more than a mouse and then coding the application behind it. It's been done in other IDEs, and perhaps it's not the ideal way to work, but it's user-friendly enough for anyone to get into it.

 

 

You're talking about bugs that occur in ALL coding, OOP style or not. OOP was partly developed as a method of categorizing and organizing that information so that it's easier to keep track of. I know that I would much, much, much rather work on complex OOP code (presuming that the code was actually done in OOP style) than I would a massive procedural system.

I guess that's where we differ you and I. To me, a massive OOP application that is badly coded is the worst nightmare imaginable. Of course, if it's well-designed, then maintaining an OO application is indeed far easier than any procedural equivalent. But in real life, well-designed OOP is somewhat of a rarity.

 

Most bugs under OOP crop up because the programmer fails to predict bizarre side-effects of interactions between object APIs.

Fixed it for you.

Yeah, okay, you got a point there. :)

 

Also, adding a class to a project usually makes said project more complex. I've seen instances where adding a single object class to an application effectively doubled the internal complexity of that application. Procedural programming may have its faults and disadvantages when compared to OOP, but inherent complexity can usually be easily avoided with some well-designed code, while OOP languages force the programmer to cope with exponential complexity, which is never fun.

I don't understand your point here. A single class cannot double the complexity unless the class is poorly designed. (Usually, thousands upon thousands of procedural lines of code stuffed into a single class. Yuck.) There's also nothing inherently exponential about OOP design. It can be as flat or as hierarchical as you want it to be. I've seen some pretty bad OOP designs where the entire application is shattered into billions of tiny objects, with just one or two monster objects controlling everything, but then I've also seen millions of lines of spaghetti procedural code that's even worse to maintain. Both come back to poor code design, and have nothing to do with the abilities (or lack thereof) offered by OOP vs. procedural design.

Perhaps we're simply not seeing the term "complexity" in quite the same way. For example, when I say that I've seen the inclusion of a single class double the complexity of an application, what I mean is that the new class was so alien to the initial design that a lot of the code in the other objects of the application had to be adjusted, modified or even augmented. And yes, it was admittedly a badly-designed application from the beginning. To me, that's added complexity.

 

It's a well-known fact that the analysis phase of any OOP project needs to be much longer and more involved than for old-style programming projects. The problem is that, in real life, there's never enough time allocated to the analysis and design phase.

Such is life. The same is actually true of procedural programming; it's just that maintenance on procedural code tends to require a massive scaling of programmers, where most OOP environments attempt to constrain the number of programmers to keep too many cooks from spoiling the broth. The best solution in both cases is to keep up on your code refactorings. If ugly code is refactored as needed, then a redesign/rewrite can be postponed considerably. Eventually you will reach a point, though, where the entire codebase will need a rewrite. For an example, I point you to Netscape->Mozilla. Netscape Inc. just couldn't take the code any farther, so a ground-up rewrite was necessary. Netscape was not OOP code.

I get that. :) Perhaps my experience of procedural programming is overshadowing my limited experience in OOP a little too much, but I've found that when an initial application design is faulty, having the data separate from the functions (unlike OOP which encapsulates both together) makes major design corrections a little easier to manage. But then again, I've always had a VERY modular approach to procedural programming, so perhaps my argument is not very solid to begin with.

 

You can't do anything remotely complex with an OO language without the help of an IDE (like NetBeans for Java).

Utter nonsense. If you can't handle your code without an IDE, then you don't know what you're doing. IDEs can provide a lot of helpful information and tools, but they're nowhere near required. I like Netbeans, myself, but I still work on exceedingly complex projects with nothing more than JEdit, the JDK, and regular JavaDocings.

You have my respect, sir. I have such a hard time keeping up with everything inside OO code that I give myself headaches if I stick with it for too long. An IDE gives me some tools to alleviate that problem, but the problem is still there.

 

About Java specifically, it's no secret that it eats RAM for breakfast, dinner and supper too. It's a memory hog, and there's no way to control how Java allocates memory.

:roll: Java eats a base 20 megs for all the libraries it loads. Most of that gets swapped out to disk after the load. The remainder is as big as your program is. Something like JDiskReport will only show a few extra megs of usage, while something like Netbeans can balloon to well over a hundred megs. Similarly, something like Stella can show only a few megs of usage, while something like Firefox can balloon into several hundred megs.

Is that supposed to impress me on some level? For instance, I can't see what's so acceptable about Java eating 20 megs just for libraries.

 

 

Allow me to correct your history. 12-15 years ago (has it really been that long?) there was a huge debate in the industry. C++ was promising faster development times, easier-to-maintain code, fewer bugs per KLOC, etc, etc, etc. The C and C++ camps were split, with the former calling the latter "slow", "difficult to use", and "needlessly complex". As a result, most GUI systems were built in C, not C++. Wrappers (like MFC) were created to appease the OOP faithful. But the truth was that C++ didn't really carry the promise of OOP as far as it should have. That title belonged to projects like Smalltalk.

 

While C++ was mostly a preprocessor over the C language, Smalltalk was a ground-up effort with different memory management, a lack of primitives, code reflection, and other advanced features that OOP was supposed to buy. The C++ faithful were unaware of what they were supposed to have, though, so they continued championing a simple inversion of C structs. Those who did see Smalltalk wrote it off as an ivory-tower academic project.

I studied Smalltalk during my university years... I never hated a language more than that one...

 

 

Enter Java. Originally started as a project to allow individual C++ classes to be recompiled without recompiling the entire codebase, it began to evolve toward many of the features that Smalltalk had. Unlike Smalltalk, however, it struck a balance between the pragmatic (e.g. primitive types) and the ivory tower (e.g. everything is an object). That alone made it a nice language to work with. But what really made it beautiful was the near-absolute lack of language features. The language itself held practically no syntactic sugar, instead favoring a push of all features to the libraries. This design meant that learning the language was easy, and making use of advanced features became as easy as plugging into another library. The coup de grace of the platform was that it shipped with a standard API set far in advance of what C and C++ programmers were used to.

I see what you're saying, but Java's libraries have grown so much over the years that it's become a huge issue in terms of learning Java. Most people tell me that if you need to do something in Java, chances are it's already been done by someone else in an equivalent fashion, so I just need to look for it and use it. A practical and noble idea, but tedious when you get down to doing the actual research. :P

 

By 1996, web development was in full swing. Java had managed to find a niche in the form of embedded applets. (A good idea gone wrong, if I've ever seen one.) Developers learned Java in order to create a few of these applets, but quickly fell in love with the power of its APIs. It wasn't long before the Java movement began championing the platform for non-applet use in a variety of situations. Early attempts at GUI programming ultimately were unsuccessful. The platform simply wasn't mature enough. But Java did find a huge niche to exploit as web pages became more dynamic.

 

The CGI of the day was inefficient and horribly unscalable for the task of generating dynamic web pages. Coldfusion and ASP appeared to solve the efficiency problems, but they proved to be similarly unscalable. Java, on the other hand, could dynamically deploy individual classes to be part of a larger system, could handle networking on a whim, could force code to follow modular standards, cleanly handled program errors in a shared system, and could easily access databases where custom routines had previously been required (God help you if you were using SQL embedded in C). So with barely a handful of classes (known as "servlets"), the most incredible server-side technology known to man was born.

Yep, it's one of the things I actually very much appreciate about Java. :D

 

This post is already getting WAY too long,

Long, but informative. I actually like those kinds of stuffy discussions. :) Also, you have to keep in mind that my experience of OOP has been fragmentary, and I haven't had the best teachers, which has led me to develop a certain distrust of OOP in general. Would you believe I haven't even yet developed the ability to spot where an interface would be useful in my Java programs? Up until now, my Java (and C++) programs have been pretty simple compared to what others have done in my company, so I haven't encountered an actual need for an interface, but I guess I'll learn as I go.

 

This argument does not follow from your previous argument. If a child is to learn how to handle OOP correctly, then (s)he needs to learn about all the developments that led up to the OOP paradigm. Otherwise he'll have no frame of reference for understanding the issues he faces.

I can't agree with that. I understand the concept of learning from the past to better understand the present, and someone who's very curious about all facets of computer programming should necessarily follow that precept, but I feel it's just not necessary for kids. It's a question of generations: As an analogy, you can see how a lot of kids of today who play with PS2s and Xboxes simply aren't interested in old consoles (one look at the Atari 2600's blocky graphics will make them laugh and/or yawn). They may accept the fact that old consoles played an important part in the way they play today's games, but beyond that, they don't usually feel the need to go back in time, so to speak. I agree that a lot of the academic stuff taught in computer programming comes directly from what's been done over the last 50+ years, but I believe kids can learn the material without the historical context. Note that I'm talking about kids here, not teenagers or college/university students. I agree that older computer students necessarily need a more contextual frame of reference to become good analysts and coders.

Link to comment
Share on other sites

 

6. Principles - I'm interested in teaching general principles that can be applied. Things like IF/THEN, FOR/NEXT, variables and so forth can be applied to any language. It's probably 15 years before many of these kids will be looking for a real job - is teaching Java really going to be that much more useful to them that long from now? Besides, I'm not opposed to them learning more languages in the future.

 

 

I'm not a programmer/coder, but I am an advanced Excel user and support the theory that exposure to something simple like BASIC provides a very good grounding for writing Excel macros and formulas.

 

When I went back to uni 10 years ago to do my Business degree I found that the BASIC courses I did at high school in the late 70's helped me go streets ahead of the kids straight out of school who really did not know any of the logic principles required to write complex but "clean" macros.

Edited by AussieAtari
Link to comment
Share on other sites

I guess I can agree with that kind of logic up to a certain point. My only question is this: At what point should we leave "line procedural programming" behind and have a different starting point for introducing computer programming to kids? I can't imagine kids still using C64s 100 years from now...

Probably when there's a fundamental shift in computer design and programming, -OR- when a replacement for BASIC finally comes along. The former might happen (hey, we could all be using Quantum Computers in 50 years), but the latter is more likely. Most of the teaching aspects of programming have been completely downplayed by the modern market. As a result, the Commodore ends up being the best choice. If someone shows the market the way (e.g. the suggested eC64), then you can expect that there will be plenty of "My First Computer for Programming" products in the future.

 

 

It depends on how you look at it. From a programming standpoint, it's true that VB doesn't have much going for it, but when it first came out, VB offered the novelty of building an interface with nothing more than a mouse and then coding the application behind it. It's been done in other IDEs, and perhaps it's not the ideal way to work, but it's user-friendly enough for anyone to get into it.

VB is a Rapid Application Development environment. It is to programming what Microsoft Access is to a SQL database, i.e. a fast, easy, but dirty way to get the job done. One might even think of it as the GUI equivalent of a PERL script. ;)

 

Like most PERL scripts, however, the results are not scalable, and definitely not easy to maintain. Thus extreme care should be given not to mistake the tool as a serious software development platform. :)

 

I guess that's where we differ you and I. To me, a massive OOP application that is badly coded is the worst nightmare imaginable. Of course, if it's well-designed, then maintaining an OO application is indeed far easier than any procedural equivalent. But in real life, well-designed OOP is somewhat of a rarity.

Well designed code is a rarity, regardless of OOP styling. As we all know, maintenance of poor code is always difficult and troubling. Is it really all that different maintaining a program with screwy structs, lotso' globals, function calls 10 pages long, and complete inflexibility in its handling of data inputs?

 

As I've been saying, it's not really anything specific to OOP; it's just that the market has embraced OOP nearly completely, so you don't see highly complex procedural code being written and maintained as much. Also, keep in mind that your inexperience with OOP coding may be impacting your ability to maintain it. One of the beautiful things about OOP is that you can more easily layer a Chinese firewall between newer, better modifications and the old crud. Creating such code, however, requires a deep understanding of how to effectively use the tools at your disposal. :)
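That "firewall" layering is usually just an adapter: a clean interface in front, with the old crud hidden behind a single implementing class. A minimal sketch; all the class and method names below are invented for illustration:

```java
// A clean interface the newer code programs against.
interface CustomerLookup {
    String nameFor(int customerId);
}

// Hypothetical legacy class with an awkward API that we can't
// afford to rewrite yet.
class LegacyCustomerTable {
    String fetchRowAsPipeDelimited(int id) {
        return id + "|Smith, John|ACTIVE";
    }
}

// The firewall: the only class that knows about the legacy mess.
// When the old code is finally replaced, only this class changes.
class LegacyCustomerAdapter implements CustomerLookup {
    private final LegacyCustomerTable table = new LegacyCustomerTable();

    public String nameFor(int customerId) {
        String row = table.fetchRowAsPipeDelimited(customerId);
        return row.split("\\|")[1]; // extract just the name field
    }
}

public class FirewallDemo {
    public static void main(String[] args) {
        CustomerLookup lookup = new LegacyCustomerAdapter();
        System.out.println(lookup.nameFor(42)); // prints "Smith, John"
    }
}
```

Nothing on the new side of the wall ever touches pipe-delimited strings, which is exactly the kind of containment that's much harder to enforce in a purely procedural codebase.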

 

 

Perhaps we're simply not seeing the term "complexity" in quite the same way. For example, when I say that I've seen the inclusion of a single class double the complexity of an application, what I mean is that the new class was so alien to the initial design that a lot of the code in the other objects of the application had to be adjusted, modified or even augmented. And yes, it was admittedly a badly-designed application from the beginning. To me, that's added complexity.

How does that differ from adding a new struct? If that struct is alien to the design, then your design may have to be adjusted. Usually that means you have the choice of either refactoring your code, or building a compatible interface between the two APIs.

 

 

But then again, I've always had a VERY modular approach to procedural programming, so perhaps my argument is not very solid to begin with.

[...]

Would you believe I haven't even yet developed the ability to spot where an interface would be useful in my Java programs?

I think you'll find that modular design is much easier in OOP, once you get a feel for it. The constraints that interfaces and abstractions place on your program are positive, in that they prevent the junior idiots (you know the type I'm talking about) from doing something that would cause major breakage. If you're finding interfaces to be a burden rather than a blessing, then you're probably looking at the code the wrong way. That bit is supposed to be hard to change. It's to encourage you to find the OOP-clean way of making the change. ;)

 

You have my respect, sir. I have such a hard time keeping up with everything inside OO code that I give myself headaches if I stick with it for too long. An IDE gives me some tools to alleviate that problem, but the problem is still there.

If I can make a recommendation, throw the IDE away. If you're dependent on it, then you're not really learning to use your tools effectively. I'd get into my "Which IDE should you choose for teaching Java? That's a flawed question: NONE" rant, but then we'd really spin off into a different topic. :D

 

Is that supposed to impress me on some level? For instance, I can't see what's so acceptable about Java eating 20 megs just for libraries.

Impress you? No. I'm only pointing out that Java does not "eat RAM for breakfast" as you claimed. The VM invocation has a cost commensurate with loading a complete platform. Windows programs have this cost as well; it's just hidden inside the Operating System. Because these costs are part of the OS, the system is a bit more efficient with its memory. Experiments have been done to eliminate this base cost through VM sharing, but the large memory capacities of modern systems have made this less pressing.
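For what it's worth, you can both observe and cap that memory appetite: Runtime exposes the JVM's own heap figures, and the standard -Xms/-Xmx launch flags control the initial and maximum heap, so "no way to control how Java allocates memory" isn't quite fair. A quick sketch:

```java
public class HeapPeek {
    public static void main(String[] args) {
        // Runtime reports the JVM's heap figures. Launch with e.g.
        //   java -Xms16m -Xmx64m HeapPeek
        // to control the initial and maximum heap sizes.
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap:  " + rt.maxMemory() / mb + " MB");
        System.out.println("allocated: " + rt.totalMemory() / mb + " MB");
        System.out.println("free:      " + rt.freeMemory() / mb + " MB");
    }
}
```

What those flags can't do, of course, is shrink the base platform load itself.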

 

I see what you're saying, but Java's libraries have grown so much over the years that it's become a huge issue in terms of learning Java.

I agree wholeheartedly. I always thought that the Java 2 bloatware was a mistake. Sun got better about including only necessary features in future revisions, but they can't go back and decrudify the Java 2 mistake. :(

 

FWIW, I always point newbies at Java to the Java 1.1 documentation. The API back then was so clean and beautiful, and remains the "core" of Java to this day. (You'll note that packages are split between java.* and javax.*. The javax.* stuff is standardized extensions.) Once you get used to the core, finding APIs in the newer releases becomes a lot easier. It's still quite daunting, but you quickly find that the APIs are generally laid out in a reasonably logical manner.

 

Most people tell me that if you need to do something in Java, chances are it's already been done by someone else in an equivalent fashion, so I just need to look for it and use it.

'Tis true. I consider myself an expert in the platform, and yet I didn't realize that an extensive printing API had been added until I started Googling for one. When in doubt, Google can be your friend. Even one mention of the correct package can quickly serve to throw the blindfold off. :)

 

 

Up until now, my Java (and C++) programs have been pretty simple compared to what others have done in my company, so I haven't encountered an actual need for an interface, but I guess I'll learn as I go.

I don't know if it helps or not, but here's a tip: If you find yourself creating classes that use inheritance, take a step back and look at your design again. Does the base class implement most of its methods, or is it an empty shell? If it's the latter, you DEFINITELY want to use an interface instead. An advanced trick is to use an interface to allow for maximum implementation flexibility, but use an abstract base class to provide a convenience implementation of many of the functions. (The java.awt.event system is full of this sort of design.) This allows you to simplify your coding (i.e. use the base class if it makes sense) without sacrificing the flexibility to do a complete (but compatible!) reimplementation of the design.
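In miniature, that java.awt.event pattern looks roughly like this; the Stock* names are invented by analogy with MouseListener/MouseAdapter:

```java
// The interface gives implementors maximum flexibility...
interface StockListener {
    void priceChanged(double price);
    void tradingHalted();
}

// ...while an abstract adapter provides empty convenience
// implementations, so subclasses override only what they care
// about -- the same design as MouseListener/MouseAdapter.
abstract class StockAdapter implements StockListener {
    public void priceChanged(double price) { }
    public void tradingHalted() { }
}

public class AdapterDemo {
    public static void main(String[] args) {
        // Only one of the two methods needs overriding here.
        StockListener listener = new StockAdapter() {
            public void priceChanged(double price) {
                System.out.println("new price: " + price);
            }
        };
        listener.priceChanged(19.95);
        listener.tradingHalted(); // harmless no-op from the adapter
    }
}
```

A caller who needs radically different behavior can still implement StockListener directly, which is the "complete (but compatible!) reimplementation" escape hatch.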

 

Also, if you really, really, really wish that Java had multiple inheritance, then your program is in need of interfaces.
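The reason interfaces scratch that itch: a class may extend only one superclass, but it can implement any number of interfaces, which covers most real uses of multiple inheritance. A small sketch with invented types:

```java
// Two independent roles an object might need to play.
interface Savable {
    String toRecord();
}

interface Displayable {
    String toLabel();
}

// One class, both roles -- no multiple inheritance required.
public class Invoice implements Savable, Displayable {
    private final int number;
    private final double amount;

    Invoice(int number, double amount) {
        this.number = number;
        this.amount = amount;
    }

    public String toRecord() { return number + "," + amount; }
    public String toLabel()  { return "Invoice #" + number; }

    public static void main(String[] args) {
        Invoice inv = new Invoice(7, 99.5);
        // The same object can be passed wherever either type is wanted.
        Savable s = inv;
        Displayable d = inv;
        System.out.println(s.toRecord()); // prints "7,99.5"
        System.out.println(d.toLabel());  // prints "Invoice #7"
    }
}
```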

 

I can't agree with that. I understand the concept of learning from the past to better understand the present, and someone who's very curious about all facets of computer programming should necessarily follow that precept, but I feel it's just not necessary for kids. It's a question of generations

Let me ask you: Did you ever consider why train tracks are always 4' 8½" in width? A naive but experienced engineer working on a new rail project might realize that this rail size is unsuitable for the current project, but would question the wisdom of changing the width. For all he knows, the size might have been standardized due to certain weight-distribution and/or energy-dissipation properties. Modifying the size of the rail could have consequences that the engineer is unprepared to deal with. So he propagates the status quo.

 

The well-educated engineer looks at the same issue, and sees no problem with changing the rail width. You see, he knows that the size of the rails has been maintained only for compatibility across different engines and cars. (Compatibility he won't have to worry about in this particular project.) The original rail widths were set by the reuse of measuring tools originally intended for roads. Those tools were based on the same ones used by the Roman Empire for building their roads. The Romans set the size of their tools based on the width of their chariots. The width of their chariots was designed to allow exactly two horses to stand side by side. And thus the engineer avoids the pitfall of basing his design on the approximate width of two horses' asses. ;)

 

 

As an analogy, you can see how a lot of kids of today who play with PS2s and Xboxes simply aren't interested in old consoles (one look at the Atari 2600's blocky graphics will make them laugh and/or yawn). They may accept the fact that old consoles played an important part in the way they play today's games, but beyond that, they don't usually feel the need to go back in time, so to speak.

Odd. My kids think the old systems are a hoot. :ponder:

 

Did you know that kids will also stick to junk food if you never expose them to better cuisine? True fact. :D

 

I agree that a lot of the academic stuff taught in computer programming comes directly from what's been done over the last 50+ years, but I believe kids can learn the material without the historical context. Note that I'm talking about kids here, not teenagers or college/university students.

See, I look at it like the "new math". Supposedly, the new math prevents kids from developing bad habits that have to be unlearned later in life. This "prepares them" for an eventual college-level education. In practice, "new math" has been an utter failure. Schools that have stuck with flash cards and other traditional methods have regularly outperformed those that adopted the new methods. Why is that?

 

My opinion is this: Children don't need abstractions. They can't understand them, they can't relate to them, and they have absolutely nothing to abstract from. So instead of saving them from having to "unlearn" bad habits, you end up killing off their ability to learn.

 

Think about it. If you decide to start a child with Java, you have to teach him:

 

Objects -> Methods -> Line Programming

 

Why? Because a simple "Hello World" looks like this:

 

public class Hello
{
public static void main(String[] args)
{
	System.out.println("Hello World!");
}
}

 

The program won't even compile unless the child can navigate:

 

1. Creating the Hello class.

2. Creating a main method.

3. Understand what the System object is.

4. Understand that "out" is a property of the System object.

5. Understand that "out" is an instance of java.io.PrintStream that wraps the Standard Output (STDOUT).

6. Understand that "println" is a method of java.io.PrintStream.

 

Compare that to:

 

10 PRINT "HELLO WORLD!"

 

The child learns that:

 

1. PRINT makes text appear on the screen.

2. Text should have quotes around it.

 

Now let's go forward, and abstract:

 

#include <stdio.h>

int main(void)
{
	printf("Hello world!\n");
	return 0;
}

 

The child already understands that the "Hello World" is text for the program to use. Now he must learn:

 

1. What a function is. (Abstractable from a gosub.)

2. What an include file is. (It's extra features that your code can use, without retyping.)

3. What STDIO is. (Abstractable from the screen output that the child has been using.)

 

Going forward...

 

public class Hello
{
public static void main(String[] args)
{
	System.out.println("Hello World!");
}
}

 

The child must learn:

 

1. What a method is. (Abstractable from functions.)

2. What a property is. (Abstractable from global variables.)

3. What an object is. (Abstractable from data structures.)

4. The intricacies of Java Streams. (Abstractable from C-style data streams.)

5. What access control is. (Abstractable from "published" and "unpublished" APIs.)

 

You don't start building a house by shingling the roof. By the same token, a child's education cannot start from a basis that they don't have. :)

 

 

* The latest JVM gets about a 20% reduction in base memory usage through the sharing of class files. Unfortunately, you need special tools to see shared memory on Windows. In the Task Manager, each process will appear to have the same amount of memory in use. This is similar to how X-Windows shows up using hundreds of megs of memory on Unix systems, due to the mapping of video RAM.


Probably when there's a fundamental shift in computer design and programming, -OR- when a replacement for BASIC finally comes along. The former might happen (hey, we could all be using Quantum Computers in 50 years), but the latter is more likely. Most of the teaching aspects of programming have been completely downplayed by the modern market. As a result, the Commodore ends up being the best choice. If someone shows the market the way (e.g. the suggested eC64), then you can expect that there will be plenty of "My First Computer for Programming" products in the future.

Now there's a neat idea...

 

Well designed code is a rarity, regardless of OOP styling. As we all know, maintenance of poor code is always difficult and troubling. Is it really all that different maintaining a program with screwy structs, lotso' globals, function calls 10 pages long, and complete inflexibility in its handling of data inputs?

 

As I've been saying, it's not really anything specific to OOP, it's just that the market has embraced OOP nearly completely. So you don't see highly complex procedural code being written and maintained as much. Also, keep in mind that your inexperience with OOP coding may be impacting your ability to maintain it. One of the beautiful things about OOP is that you can more easily layer a Chinese wall between newer, better modifications and the old crud. Creating such code, however, requires a deep understanding of how to effectively use the tools at your disposal. :)
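That "wall between new code and old crud" usually takes the form of an adapter: new code talks only to an interface, and a single adapter class is the only place that touches the legacy API. A minimal sketch, with entirely hypothetical names (UserStore, LegacyUserDatabase):

```java
// New code depends only on this interface.
interface UserStore {
    String lookup(int id);
}

// Stand-in for the legacy code we can't rewrite yet.
class LegacyUserDatabase {
    String fetchUserRecordByNumericKey(int key) {
        return "user-" + key; // placeholder for messy legacy logic
    }
}

// The adapter is the ONLY class allowed to know about the legacy API.
// When the old crud is finally replaced, only this class changes.
class LegacyUserStore implements UserStore {
    private final LegacyUserDatabase db = new LegacyUserDatabase();
    public String lookup(int id) {
        return db.fetchUserRecordByNumericKey(id);
    }
}
```

Everything downstream of `UserStore` is insulated from the legacy mess; swapping in a clean implementation later is a one-class change.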

You know, I felt really depressed for a while after seeing your responses to my posts, not because my arguments against OOP were debunked (I'm big enough to take a good counter-argument), but because it made me realize how much I still have to learn. Like I've said before, I haven't had the best teachers, and all the books I've read on the subject only cover the basics ("yes, I know what inheritance and polymorphism are, can we please move on???"). I'm at a point where I need some hard experience, or at least some real code to study and learn from. My main problem is that my life is already full of things I need to do, and what little free time I have is spent on more or less trivial things (like posting on these boards, for example). I talked to my boss today about it, and he told me he's got plans for me in terms of Java development, so I can be at least a little bit more hopeful.

 

One thing that I do not want to do is learn Java and C++ at the same time. I tried it before, and it confused the hell out of me... That's one of the main reasons why I stopped working on the BasicVision project, months ago (lack of free time and motivation are the other reasons) but darn it, I WILL get back into that project eventually, if it's the last thing I do!

 

I think you'll find that modular design is much easier in OOP, once you get a feel for it. The constraints that interfaces and abstractions impose on your program are positive, in that they prevent the junior idiots (you know the type I'm talking about)

Yeah, that would be me, at this point in time. :P :D

 

...from doing something that would cause major breakage. If you're finding interfaces to be a burden rather than a blessing, then you're probably looking at the code the wrong way. That bit is supposed to be hard to change. It's to encourage you to find the OOP-clean way of making the change. ;)

That's assuming the original code was well-designed...

 

 

If I can make a recommendation, throw the IDE away. If you're dependent on it, then you're not really learning to use your tools effectively. I'd get into my "Which IDE should you chose for teaching Java? That's a flawed question: NONE" rant, but then we'd really spin off into a different topic. :D

I'd follow that advice, but every programmer in my office uses NetBeans, so ditching it myself is not an option.

 

 

I see what you're saying, but Java's libraries have grown so much over the years that it's become a huge issue in terms of learning Java.

I agree wholeheartedly. I always thought that the Java 2 bloatware was a mistake. Sun got better about including only necessary features in future revisions, but they can't go back and decrudify the Java 2 mistake. :(

And that reminds me, there's also the whole deal with building GUIs under Java. I've done a couple of them in an academic setting (using Swing), but getting into it seriously is going to give me headaches, I just know it. I'd better go buy some Tylenol...

 

I don't know if it helps or not, but here's a tip: If you find yourself creating classes that use inheritance, take a step back and look at your design again. Does the base class implement most of its methods, or is it an empty shell? If it's the latter, you DEFINITELY want to use an interface instead. An advanced trick is to use an interface to allow for maximum implementation flexibility, but use an abstract base class to provide a convenience implementation of many of the functions. (The java.awt.event system is full of this sort of design.) This allows you to simplify your coding (i.e. use the base class if it makes sense) without sacrificing the flexibility to do a complete (but compatible!) reimplementation of the design.

 

Also, if you really, really, really wish that Java had multiple inheritance, then your program is in need of interfaces.
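The point here is that while a Java class may extend only one class, it may implement any number of interfaces, which covers most legitimate uses of multiple inheritance. A quick sketch with made-up names (Drawable, Saveable, Document):

```java
// Two independent capabilities, expressed as interfaces.
interface Drawable {
    String draw();
}

interface Saveable {
    String save();
}

// One class can pick up both capabilities; no multiple class
// inheritance required.
class Document implements Drawable, Saveable {
    public String draw() { return "rendered"; }
    public String save() { return "saved"; }
}
```

A `Document` can then be passed to any code expecting a `Drawable` and to any code expecting a `Saveable`, which is the usual reason people reach for multiple inheritance in the first place.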

I'll keep that advice in mind. Thanks. :)

