
Open Lara engine on the ATARI Jaguar


Gunther


It runs on the GD for me; on the Phoenix emulator it only shows the title, the bars, and the letters on top.

 

I agree with AlucardX, don't know about the best slideshow, but despite the speed issue it made me smile seeing this running on my Jag. Nice experiment. It brought back memories of playing it on a friend's PSX and knowing that it would never be released on the Jag, although in the FAQ it was announced as an unreleased game "Lara Cruz: Tomb Raider" or something like that.

So in that regard it is kinda nice to finally see it, in a way, on the Jag.


9 hours ago, 42bs said:

I checked an old GCC (Sourcery G++ Lite 4.4-52) and it does know about it.

Where can we find this Sourcery G++ compiler?

12 hours ago, swapd0 said:

I want to have two build versions, one with skunk debug code, the other one "naked"

Out of curiosity, does the OpenLara source code use a lot of float/double operations?


10 minutes ago, dilinger said:

Where can we find this Sourcery G++ compiler?

Out of curiosity, does the OpenLara source code use a lot of float/double operations?

No, there's a directory named "fixed" with some classes and functions implemented in fixed point, although in the main src directory the same classes use floating point.
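For context, fixed-point classes of this kind usually look something like the following. This is a hypothetical minimal 16.16 sketch, not OpenLara's actual "fixed" code: the integer part lives in the high 16 bits, the fraction in the low 16, and the multiply widens to 64 bits so the intermediate doesn't overflow.

```cpp
#include <cstdint>

// Hypothetical minimal 16.16 fixed-point sketch (not OpenLara's actual classes).
struct fixed {
    int32_t v;  // raw 16.16 value

    static fixed fromInt(int32_t i) {
        // Shift the integer into the high 16 bits (via unsigned to avoid UB).
        return fixed{ int32_t(uint32_t(i) << 16) };
    }
    fixed operator+(fixed o) const { return fixed{ v + o.v }; }
    fixed operator-(fixed o) const { return fixed{ v - o.v }; }
    fixed operator*(fixed o) const {
        // Widen to 64 bits before multiplying, then drop 16 fraction bits.
        return fixed{ int32_t((int64_t(v) * o.v) >> 16) };
    }
    int32_t toInt() const { return v >> 16; }  // truncate the fraction
};
```

The appeal on a chip without an FPU is that every operation above is a plain integer instruction; only the multiply needs the extra widening step.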


2 hours ago, JagChris said:

It looks like @XProger is gearing up for N64 and 32X versions. 

 

Given the speed on the GBA, an N64 version should run rather nicely, even with software rendering.

 

A 32X version using one SH2 and plain C should be slow, but not a slideshow - it's basically how my yeti3d demo on the 32X works. If you made the same kind of optimizations to the 32X version as the GBA version, it should be at least as fast, and faster if you can pull some dual processor stunts like Vic did for D32XR. The main thing going for the 32X version is you can have all the game logic and everything running on the SH2 rather than the 68000. The SH2 has a mature and stable C/C++ compiler, unlike the JRISC, so acting like the main SH2 is the only processor, and all the MD stuff is just a support chip to be initialized and forgotten about, is a valid way of programming on it. So I wouldn't be surprised if an initial build on the 32X ran better than the current build on the Jaguar. In the end, the Jaguar version should be better, but it will take more work to get there.

 


1 hour ago, 42bs said:

Here:

https://www.sciopta.de/ftp/freescale-coldfire-4.4-52.7z

It's not an installer (anymore). I used to have it installed in "c:\compiler\gcc" and it may rely on this path.

Thank you. I have also heard from various Amiga developers about gcc 2.95.3, which is considered to have the best 68K support.

I have tried to get it but am out of luck so far.


Not 100% sure about 68k support as I mainly used it for Coldfire. But from what I have seen so far, the code quality is good.

The problem is the parameter passing via stack (the ABI). The old MetroWerks compilers used registers which resulted in even better code.


13 minutes ago, dilinger said:

Thank you. I have also heard from various Amiga developers about gcc 2.95.3, which is considered to have the best 68K support.

I have tried to get it but am out of luck so far.

 

1 minute ago, 42bs said:

Not 100% sure about 68k support as I mainly used it for Coldfire. But from what I have seen so far, the code quality is good.

The problem is the parameter passing via stack (the ABI). The old MetroWerks compilers used registers which resulted in even better code.

Are these changes that were made to GCC upstream and then removed, or are these changes that were made by someone else? If they were made by someone else, were the binaries distributed? If so, it is my understanding that they would be required to share the source code of their changes to GCC according to the terms of the GPL.


11 hours ago, alucardX said:

I can't wait to see how this game starts to run as code is moved over to the RISCs where things can begin to be done efficiently.

It is portable C code. People told me that you can compile to JRISC like you can compile to 68k, yet the only (software) cache is in some game released back then, and the source is lost. It is great that Doom apparently had better colors on the Jag thanks to some decisions for CRY totally unrelated to the JRISC assembly code, which was just a necessity. Also this overflow flag: even the 6502 has it and uses it for SBC (and why do people miss overflow in CMP? As an exception?). The Jag instead has > < >= <= == <> ... all the combinations. Why would 32 bit overflow anyway? You can check for sign in combination with carry, but somehow this is not identical to overflow:

  • negative + positive cannot overflow. The sign comes from the larger absolute value. Carry is set if adding a positive value to a small negative value pulls it up into positive range: carry != sign => no overflow
  • negative + negative can overflow. If everything is okay, the sign stays negative and the carry out is set. If zeros bubble into the sign bit, we have overflowed: carry is set but sign is clear => overflow
  • positive + positive can overflow. If everything is okay, sign and carry are both clear; if the sign ends up set (with carry clear), we have overflowed

So how can you use sign and carry to detect overflow? Why did they even care for that combination? For speedy JRISC code you will want to convert everything to unsigned. I mean, the integer parts to index into a texture need to wrap around much earlier. Carry may be used for the fractional parts in the texture mapping.
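The three cases above collapse into one rule: a signed add overflows exactly when both operands have the same sign and the result's sign differs from it; mixed-sign adds can never overflow. A small illustrative sketch of that rule in C++ (not JRISC code, and the function name is my own):

```cpp
#include <cstdint>

// Returns true iff the signed 32-bit addition a + b overflows.
// (ua ^ sum) has bit 31 set iff a's sign differs from the result's sign;
// likewise for b. Both differ at once only when a and b share a sign and
// the result flipped - the classic overflow condition.
bool add_overflows(int32_t a, int32_t b) {
    uint32_t ua = uint32_t(a), ub = uint32_t(b);
    uint32_t sum = ua + ub;  // unsigned wraparound is well defined
    return ((ua ^ sum) & (ub ^ sum)) >> 31;
}
```

Hardware overflow flags compute essentially this XOR of the carries into and out of the sign bit, which is why sign and carry alone don't reconstruct it.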


7 hours ago, 42bs said:

CodeSourcery was acquired by Mentor Graphics, which is now Siemens. Back then, they did provide the sources.

The product was CodeBench, but I doubt that Siemens has the old source archives.

If at some point someone finds the code OR modifies modern gcc for this, I would suggest submitting it to upstream maintainers.


7 minutes ago, Zerosquare said:

From what I understand, 68000 support has been going downhill in GCC for a long time -- it still works, but the quality of the generated code is worse than it used to be, as it's no longer actively maintained.

What would make it worse? Is it due to the code for the 68K portion not being brought up to date with newer libraries or something like that?


Just now, Zerosquare said:

GCC architecture is complex and I don't know enough about it to answer for sure.

That is fair. If it is something that would be helpful to the Jaguar community maybe there is a way to find out the status and see what is not working and how to fix it. If it is a net gain for gcc I see no reason why the upstream maintainers wouldn't include it in the official releases.


6 hours ago, alucardX said:

What would make it worse? Is it due to the code for the 68K portion not being brought up to date with newer libraries or something like that?

I think it's more that the newer 68k code generation is geared towards 32-bit chips, particularly the ColdFire cores. The older gcc, when targeting the 68000, would produce faster code as it concentrated on keeping as much of the code as possible using 16-bit constants and offsets and such. The newer compilers produce better code, but not faster code, as they pay barely any attention to the 68000's faster 16-bit word size. There's a lot more about this over at SpritesMind, since the Genesis/MD used the 68000. People benchmarked the old gcc versus newer versions. I don't think anyone has benchmarked it against anything really new, like 10.x or 11.x.


2 minutes ago, Chilly Willy said:

I think it's more that the newer 68k code generation is geared towards 32-bit chips, particularly the ColdFire cores. The older gcc, when targeting the 68000, would produce faster code as it concentrated on keeping as much of the code as possible using 16-bit constants and offsets and such. The newer compilers produce better code, but not faster code, as they pay barely any attention to the 68000's faster 16-bit word size. There's a lot more about this over at SpritesMind, since the Genesis/MD used the 68000. People benchmarked the old gcc versus newer versions. I don't think anyone has benchmarked it against anything really new, like 10.x or 11.x.

This is a question born of my own ignorance, but are there any compiler flags to deal with that situation?


45 minutes ago, alucardX said:

This is a question born of my own ignorance, but are there any compiler flags to deal with that situation?

I found some info for the 68k family of cpus. Is it helpful to select the 68000 in this way?

 

These are the ‘-m’ options defined for M680x0 and ColdFire processors. The default settings depend on which architecture was selected when the compiler was configured; the defaults for the most common choices are given below.

-march=arch

Generate code for a specific M680x0 or ColdFire instruction set architecture. Permissible values of arch for M680x0 architectures are: ‘68000’, ‘68010’, ‘68020’, ‘68030’, ‘68040’, ‘68060’ and ‘cpu32’. ColdFire architectures are selected according to Freescale’s ISA classification and the permissible values are: ‘isaa’, ‘isaaplus’, ‘isab’ and ‘isac’.

GCC defines a macro __mcfarch__ whenever it is generating code for a ColdFire target. The arch in this macro is one of the -march arguments given above.

When used together, -march and -mtune select code that runs on a family of similar processors but that is optimized for a particular microarchitecture.

-mcpu=cpu

Generate code for a specific M680x0 or ColdFire processor. The M680x0 cpus are: ‘68000’, ‘68010’, ‘68020’, ‘68030’, ‘68040’, ‘68060’, ‘68302’, ‘68332’ and ‘cpu32’. The ColdFire cpus are given by the table below, which also classifies the CPUs into families:
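As a concrete example: assuming an m68k cross toolchain whose driver is named m68k-elf-gcc (the exact name varies by distribution and build), restricting code generation to the plain 68000 would look like this. The file names here are placeholders.

```shell
# Hypothetical cross-compile restricted to the 68000 instruction set.
# -mcpu=68000 keeps GCC from emitting 68020+ instructions or addressing modes.
m68k-elf-gcc -mcpu=68000 -O2 -fomit-frame-pointer -c main.c -o main.o
```

Whether this helps with the codegen-quality issue discussed above is a separate question: -mcpu only constrains the instruction set, it doesn't bring back the old 16-bit-oriented tuning.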

