|
Post by gsteemso on Aug 3, 2020 20:25:48 GMT
It's been quite some time, but if simply using GOSUB and RETURN isn't helping... wasn't there something about GOTO [variable]? I do know the older BASIC constructs originally meant for dynamic program flow (specifically GOTO [constant] and ON [variable] GOTO) aren't always suitable.
|
|
|
Post by gsteemso on Jul 8, 2020 19:50:20 GMT
I just re-read the original post, and noticed a detail we all overlooked.
There is, by definition, no way to know how many bytes a USR file actually holds. It is as the name says -- "USeR defined". Examples:
- If someone decided to disguise a totally new disk format by labelling it as a single USR file on an otherwise normal disk, it may have allocated every sector on that disk that Commodore DOS hadn't already claimed, making a file many times larger than could ever fit into the computer's RAM... but if the program responsible for doing it hadn't yet stored any real files in that filesystem image, there would be no actual data involved.
- Equally, if someone decided that dealing with Commodore DOS' very awkwardly-sized 254-byte file blocks was an intolerable pain in the neck, they might define a USR format that didn't bother with track-&-sector pointers in each sector, using some sort of index file to keep track of 256-byte file blocks instead. Depending on how carried away someone got with a scheme like that, you could end up with data blocks that _never_ contain unused space!
In any case like these, you are at the mercy of whoever implemented the USR file in question; if they didn't bother to keep the CBM DOS directory entry up to date regarding how many blocks it occupies, you have no way whatsoever to even _guess_ how much is in it.
Just to, you know, keep things cheerful and positive. *grin*
|
|
|
Post by gsteemso on Jul 8, 2020 19:21:51 GMT
Also be aware that, unless character attributes are outright disabled, text on the VDC can technically draw from both character sets at once (the "which character set to use" bit, effectively the ninth bit of the character code, is stashed in with the other attributes like foreground colour, BLINK, etc.). I've got absolutely no idea if or how that interacts with whatever BASIC's built-in screen editor expects its working data to look like. I do recall that if you toggle the keyboard between "text" and "graphics" modes, the computer treats your input accordingly -- but unlike with the VIC-II or -IIe, any text already on the screen stays as you typed it, because the VDC has that extra attribute bit to keep track of it with.
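To make the attribute layout concrete, here's a quick sketch (Python, purely as an illustration; the bit assignments are the 8563 VDC's per-cell attribute byte) of pulling those fields apart -- note how the character-set selector rides along with the colour and BLINK bits:

```python
# Decode one 8563 VDC attribute byte. When attributes are enabled,
# each character cell gets one of these; bit 7 is the "ninth
# character bit" that selects between the two character sets.
def decode_vdc_attr(attr):
    return {
        "alt_charset": bool(attr & 0x80),  # bit 7: second character set
        "reverse":     bool(attr & 0x40),  # bit 6: reverse video
        "underline":   bool(attr & 0x20),  # bit 5: underline
        "blink":       bool(attr & 0x10),  # bit 4: blink
        "fg_color":    attr & 0x0F,        # bits 0-3: foreground colour
    }

# Example: a blinking character drawn from the alternate set, colour 5
print(decode_vdc_attr(0x95))
```

Because the charset choice is stored per cell like this, retyping in the other keyboard mode only affects newly written cells, which is exactly the behaviour described above.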
|
|
|
Post by gsteemso on Jul 8, 2020 0:50:02 GMT
I'm not following your reasoning here. The original question is "how do I find the exact size of an arbitrary file?"
In the absence of a programming technique to do it more directly, the only apparent method is to load the entire file into memory and then check to see where it ended. If you needed that RAM for something more important (like whatever your program actually _does_), well, sucks to be you. If you need the length of a file that wouldn't actually _fit_ in RAM, you're even further out of luck.
Your idea is to... I'm not clear, actually. What is it supposed to achieve if you put zeroes all over a disk? Bear in mind that any data written to an unused sector will have been staged in one of the drive's internal RAM buffers, and the whole works written out in one go. There's no reason whatsoever to assume that every Commodore-compatible drive anyone ever sold was configured to zero out the buffer first. So, no matter what you put on the disk ahead of time, it's not going to have any effect on the endings of files.
In any case, the actual problem being asked about involves examining files that already exist. By definition, any changes or improvements anyone could make are out of scope for this thread.
|
|
|
Post by gsteemso on Apr 6, 2020 3:34:48 GMT
As most here have stated, the CBM DOS format has no inherent record of a file's exact size. That said, the directory on a disk does hold a count of how many blocks a specific file should be occupying, so an upper limit on the file's size is straightforward to calculate.
As another fellow just posted, a block on disk holds exactly 254 bytes (a pointer to the next slice of the file uses up two bytes of the raw disk sector holding it). Take 254 bytes * the number of blocks in the file, and there's your upper bound.
Of course, a file rarely ends on an exact multiple of 254 bytes, so the final block isn't usually completely full -- but to find out exactly how much shorter the file really is than that worst-case upper limit, you have to locate and examine the file's last block to see how full it is.
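For the worst-case figure, here's a sketch (Python, just for illustration) that reads the block count straight out of a raw 32-byte directory slot from a D64 image -- the count sits in the slot's last two bytes, low byte first -- and multiplies by 254:

```python
# Upper bound on a file's size from its directory entry, as described
# above: 254 data bytes per allocated block. "entry" is one raw
# 32-byte directory slot from a D64 image; the block count lives at
# offsets $1E/$1F of the slot, low byte first.
def size_upper_bound(entry):
    blocks = entry[0x1E] | (entry[0x1F] << 8)
    return blocks * 254

# A file the directory lists as 10 BLOCKS holds at most 2540 bytes.
entry = bytearray(32)
entry[0x1E] = 10
print(size_upper_bound(entry))  # 2540
```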
The only way to find that last block is to laboriously step through every sector in the file, squinting at the track+sector link stored in each one to find the next part. That would take forever if you had to do it in the actual computer, but luckily the drives are programmable; there are utility commands you can give the drive that will let you do the whole sequence in one of its internal work buffers.
Each track on a floppy disk has a "sector zero", but no CBM floppy has a "track zero". If you encounter a sector which claims its successor is in this nonexistent place, it really means there _is_ no next block -- you're at the last one. The byte that would normally hold the next sector number instead points to the last occupied byte in the sector, in the range [2...255].
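To show the whole chain-walk in one place, here's a sketch in Python (purely illustrative -- in practice you'd do this inside the drive with block-read commands, as noted above; `read_sector` is a hypothetical helper that returns the 256 raw bytes of one sector, e.g. from a D64 image):

```python
def exact_file_size(read_sector, track, sector):
    """Follow a CBM DOS track/sector chain to a file's exact length.

    Each sector's first two bytes link to the next sector. A link
    track of 0 marks the last block, and the second byte then holds
    the offset of the last used byte (2..255) instead of a sector
    number, so the last block contributes (offset - 1) data bytes.
    """
    total = 0
    while True:
        data = read_sector(track, sector)
        next_track, next_sector = data[0], data[1]
        if next_track == 0:
            # Last block: bytes 2..next_sector are in use.
            return total + (next_sector - 1)
        total += 254  # a full block carries 254 data bytes
        track, sector = next_track, next_sector

# Fake two-sector file: first block full, last block's final used
# byte is at offset 101, so the size is 254 + (101 - 1) = 354.
sectors = {
    (17, 0): bytes([17, 1]) + bytes(254),
    (17, 1): bytes([0, 101]) + bytes(254),
}
print(exact_file_size(lambda t, s: sectors[(t, s)], 17, 0))  # 354
```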
There is one minor exception to the foregoing. If you are looking at a random-access (RELative) file, the side-sector structures associated with it hold pointers to every single block it occupies. Finding the last sector of such a file is _much_ faster and simpler, though you do still need to look at it to see how much is unused.
Alas, although the aftermarket GEOS VLIR files incorporate an enormous number of improvements, the actual structure of them leans heavily on the drive's existing, native file formats. In many ways, the format resembles a sheaf of individual SEQuential files, with many of the associated problems.
|
|
|
Post by gsteemso on Mar 1, 2017 8:23:51 GMT
Well, it’s certainly been a while since I had anything to add to this discussion! I am finally getting more or less acclimated to full-time employment (which had not been a sustainable option in the past due to long-term health issues), and so I now have a few minutes per week that I can spend on retrocomputing.
I still intend to move forward with many of the concepts that have been touched upon in this thread; though, as naturally happens when you get several people brainstorming at once, some of them are implementation details better addressed later on (e.g. VDC raster interrupts), and others might be better served as parts of other projects entirely (such as implementing recently-developed programming languages on 8-bit Commodores).
Having allowed this whole business to percolate through my mind over the past several years, I have reached a few pertinent conclusions:
• I’m confident that it would most likely be possible to keep most or all of the OS in RAM even on an unexpanded 128 — but doing it would leave so little free space for user programs as to be utterly impractical. Thus, using the same kinds of workarounds that contemporaneous OSes used to be lumbered with will be unavoidable. Briefly, the OS will need to be carefully modularized so that only the most often-needed functions are permanently resident, and the remainder are divided up in such a way that most programs can remain fairly performant with only a few of them in context at a time (i.e., the vast majority of user programs MUST be able to perform their core functions without needing to constantly swap OS modules back into view; more specialized operations would normally be infrequent enough that the added swap time wouldn’t really be noticeable to the user).
• With the benefit of several years’ perspective, I can see that the potential project of mine which initiated this whole discussion (a comprehensive system for creating, manipulating, processing, and transmitting streams of Unicoded text files) would be of roughly the same complexity regardless of whether I did it as an integrated suite of programs, or as a full-blown OS. There are surprisingly few factors involved which would lead me, or anyone else for that matter, to choose one path over the other! Here are the most meaningful few I’ve come up with so far:
- Reusability. I’d like to think that I’ve come up with some genuinely innovative details in the course of all this, even if the vast majority of it is merely a judicious assemblage of very bright individual ideas that others have perfected over the past 30-odd years. Framing it all in terms of an OS would make it much easier for me to add new uses to the whole mess in the future, and relatively easy for other people to benefit from the technology base. The latter point is a lot less meaningful than it sounds, though. It looks to me like very few people indeed ever bother to write stuff that runs on someone else’s aftermarket OS. I think the only reason GEOS managed to be an exception was that it was bundled with thousands of C64s right from the factory, and so was not actually an aftermarket product.
- Rigour of specification. If this kind of project is developed in terms of an OS, it HAS to be much more rigorously defined, so it can be usefully factored into relatively orthogonal-of-purpose modules. A standalone suite of programs has no constraints but its own, so if part of the design ends up being a bit less than well-organized, no ill effects would initially be felt… but future attempts to improve that part of the system could easily be so inconvenienced as to end up bogged down in a boring design-cleanup project, so uninteresting to work on that it never got done. In my specific case, this is a point of extreme and terrifying relevance!
• The components needed for this system to be operable _at all_ are:
- The editor.
- A way to define some sort of filter programs.
- A way to apply said filters to random sequential files.
• The enhancements required to lift the system from merely “usable” to actively “useful” are:
- Unicode support, no matter how sharply limited.
- The filters being definable as simple, textual scripts (i.e., not requiring further user actions before being applicable), in a preëxisting computer language widely deployed enough that some users might reasonably be found to already know it, and the editor being able to apply them to data files directly (i.e., without requiring use of an external utility program).
- The editor being able to comfortably handle more than one open file at a time.
- Some means, beyond Sneakernet, of moving random working files to and from other computers. In other words, network access! Because so few users that own network interfaces own the _same_ network interface, this would pretty much HAVE to be done via some sort of plug-in device driver mechanism.
• The further enhancements required to make the system pleasant enough to use that people actually _will_ are:
- A proper file-manipulation interface. This could benefit from also including a proper disk- and drive-manipulation interface, as well as the use of various minor filesystem enhancements that were introduced by GEOS, but neither of those is essential.
- A customizable display. At a minimum, being able to set the colours, and ideally the typeface as well. Being able to adjust the specific limitations of the Unicode feature would also be extremely useful.
- Mouse support.
- While probably less essential than the foregoing, I firmly believe that designing the whole works around what I described in an earlier post as an HPI (a user interface with both graphical and command-line features, harmoniously blended) would greatly enhance its usability.
• The less well-known underlying technologies I would want to make use of are:
- The concept of “resources,” first introduced by Apple in the early 1980s.
- Half a century of advancements in OS shell design. Functionally, this would draw elements from the TOPS-20 executive, the various UNIX-ish command shells, spreadsheets, pseudo-GUIs implemented on a character-cell display by means of modifiable character definitions (what Commodore developers seem to have begun calling a TUI), and a great variety of other things.
- A well-thought-through device driver architecture.
- As mentioned earlier in this thread, a memory-pointer model based on the excellent work of Craig Bruce, possibly expanded upon to accommodate subsequent hardware developments around the world. Memory management in this system would also benefit from concepts I personally first encountered in the classic Mac OS (if, probably, improved quite a lot based on later innovations!) — these were discussed at length in earlier posts in this thread.
- Relatively recent formalizations of software fault-tolerance.
- Likewise recent work that improves data-packet handling through zero-copy techniques.
There are probably more things I could include in these lists, but it’s after midnight and the household are retiring for the night. Any thoughts on this rambling mess?
|
|
|
Post by gsteemso on Feb 27, 2017 20:46:53 GMT
The book "Mapping the 128" by Compute!'s Gazette can probably answer this question in as much detail as you'd ever want, and then some. The descriptions of the KERNAL ROM routines describe the boot sequence in exhaustive detail.
I don't remember the details off the top of my head, but the boot sequence passes through so many indirections and distinct assembly-language stages that I will be genuinely astonished if your problem takes more than a simple JMP (i.e., SYS) to solve.
If you can't readily get hold of a physical copy of the book, there's a pretty good PDFified scan of it on Bombjack.
|
|
|
Post by gsteemso on Feb 27, 2017 20:35:05 GMT
Is there any change if you adjust the RAM refresh register?
|
|
|
Post by gsteemso on Jan 14, 2017 7:16:51 GMT
I am also interested in that answer! I currently have a flat panel monitor (actually, two of them, but one has no stand) which will display S-video for the VIC-IIe screen, plus another dozen or so not currently in use which almost all take DVI. I’ve been watching with great interest the recent developments concerning native VGA generation by the VDC, but I haven’t yet decided on a specific method of converting standard CGA output to any of the suboptimal formats supported by my extensive collection of lightly damaged thrift-store flat panels.
|
|
|
Post by gsteemso on Nov 22, 2016 12:24:26 GMT
"Both SIDs share the normal address" — wait, what? They both get the same values stored into any given register? That would give richer-sounding 3-voice sound rather than the expected 6 voices. And which SID gets selected when the CPU tries to read anything back?
|
|