|
Post by hydrophilic on Nov 8, 2014 10:06:07 GMT
DOSSHELL is a 16-bit app, so you can't play with it on modern 64-bit versions of Windows. But you can play with it if: * you have an old 32-bit version of Windows (like XP), or * you can boot into DOS mode (boot CD, etc.)
DOSSHELL is like Windows 0.5 or the GEOS Desktop... it lets you browse files and launch applications, but that is it. It does not offer windowing services or multi-tasking (i.e., programs do not run at the same time) but it does allow task-switching (i.e., you can load 2 or more programs, and switch between them... but only 1 ever runs at a given moment).
For a fully integrated OS that allows classic BASIC and ML programs to run, in addition to new software that accesses new services, you would need to replace all ROMs... the KERNAL and BASIC ROMs... *and* you would need to use the Function ROM socket to hold most of the new services. Using the REU to cache data/drivers sounds like a good idea.
The only alternative I can think of would be something like GEOS, where you load everything into RAM (or REU)... but then existing software would not work unless you use the lame GEOS method of rebooting. (Well, it works, so it is not completely lame... just cumbersome.)
Edit: OK, gsteemso, I've found the time to read through all of your very thoughtful post. So the above blabbering was about a 'maximally compatible' version but left unanswered the question of UI design. I think a system that uses text mode with a custom character set would give a good balance between speed and flexibility. A good example is the Arcade Game Construction Kit... it only runs on a C64, but the same ideas apply to the C128 (either 40 or 80 column). In essence, it is a TUI (Text User Interface), but it uses windows and a custom font (and, in 40-column, sprites) to allow (limited) graphics as well.
Well the 'limited graphics' of a TUI is the biggest problem with this idea... it is not as flexible as true bitmap mode, but it runs snappy fast on 1 (or 2) MHz CPU. I think most of us know how sluggish GEOS runs on a stock CPU. I would not use a true GUI (i.e., bitmap mode) without a SuperCPU.
In a windowed environment like this, you could use programs (err, apps) in the method most productive to you (the user). Writing a program or document? Switch to fullscreen. Want to copy and paste files? Set up two windows and then use "drag and drop". Play a game? Full screen bitmap (or whatever the game needs).
I like your idea of using Perl... (I have no idea what Lua is)... another idea might be to extend BASIC... make it like Visual BASIC. I can't think of anything that I can do in Perl that I can't do in VB. (Of course, I am not a Perl expert by any means )
This is a very ambitious project for the C128... trying to make it run on multiple machines sounds extremely difficult. Not impossible, but who has the time to test multiple hardware platforms?
OK, now I feel I'm rambling, so I'll shut up... if I've missed any of your points, gsteemso, let me know and I'll give specific feedback.
|
|
|
Post by nonefornow on Nov 11, 2014 1:04:01 GMT
The solution of trying to run on multiple machines is, to me, secondary. If you have an OS that runs on a stock C128, the additional hardware configurations will have to be handled by specific drivers or add-on pieces. DOSSHELL was different from GEOS in the sense that it would let you run any application written for the PC outside its own shell. Not all PRG applications could run under GEOS, particularly the ones that made use of the kernel, since GEOS took over that to run its own internals. GEOS provided word processors, spreadsheets, etc., to run within its own environment. DOSSHELL allowed you to run any other (non-Windows) applications, programs, and games.
|
|
|
Post by hydrophilic on Nov 14, 2014 6:42:27 GMT
Right! For 'maximum compatibility' it would need to be like DOS-SHELL. If you get more sophisticated, then you would need custom apps like GeoPaint, GeoWrite, etc...
|
|
|
Post by gsteemso on Nov 29, 2014 1:48:18 GMT
I’ve been thinking about this some more, and some of the points people have raised make a lot of sense. In no particular order, some observations:
(1) The “same binaries on C64 as on C128” thing, I now realize, only works if you intentionally ignore (i.e., refrain from using) many of the 128’s features. Realistically, a proper text processor for actual productive use is probably going to need the VDC anyway, so I’m now back to thinking in 128-specific terms.
(2) The “able to run existing software” thing is a red herring. The (for lack of a better term) “standard” way to run really large or elaborate Commodore programs one after another is to reset the machine in between them, because they usually bypass the [run/stop]-[restore] key combination, so nothing I implement is going to be clunkier than that. Plus, it is inherent in developing the new system that it be capable of things (and thus require other things) that do not currently exist. New software is inevitable. For stuff that already exists, just run it the traditional way.
(3) One drawback to the Commodore DOS is that, with limited exceptions, there is no way to access the middle or end of a file except by first reading through all the stuff that comes before it. User files can be designed to get around that, but they require manual disk-block management, which can be destroyed by validating the disk. Relative files only work with a fixed record size, and cannot contain arbitrary data.
GEOS improved matters slightly by introducing the VLIR file structure. Basically it expands the concept of a sequential file into the concept of a bundle of sequential files. It has a lot in common with the Mac HFS idea of file forks, except the forks are not named and the structure efficiently accommodates up to 127 of them instead of just two. I think of this as “VLIR 1,” because of the “File Structure” flag stored in the file’s directory entry which always has this value.
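To make the fork structure concrete, here is a little Python sketch of how a VLIR 1 index block decodes, assuming I have the layout right (127 track/sector pairs stored in bytes 2-255 of the index block, with $00/$00 marking a record that does not exist and $00/$FF a record that exists but is empty):

```python
def parse_vlir_index(block: bytes):
    """Decode a VLIR 1 index block into a list of 127 fork entries.

    Assumed layout: bytes 2-255 hold 127 (track, sector) pairs.
    track=0, sector=0    -> record does not exist (None)
    track=0, sector=$FF  -> record exists but is empty (())
    anything else        -> (track, sector) of the fork's first data block
    """
    assert len(block) == 256
    records = []
    for i in range(127):
        track = block[2 + 2 * i]
        sector = block[3 + 2 * i]
        if track == 0 and sector == 0:
            records.append(None)
        elif track == 0 and sector == 0xFF:
            records.append(())
        else:
            records.append((track, sector))
    return records
```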
An obvious improvement I could invent is “VLIR 2” files, which are basically an extension of VLIR to allow convenient use of and random access to very large documents. The “File Structure” flags in the directory entry and GEOS info sector would be set to $02 instead of $01.
Recall that original, VLIR 1 files are basically an adaptation of a standard sequential file, allowing up to 127 sequential-file “forks” (“records” in GEOS terminology, which is confusing enough when involving relative files in the conversation that I have opted to ignore the precedent). This limit is inconvenient because no facility is provided for random access within a fork, thus influencing developers to use the forks directly instead of using structures within them — and while few applications would need more than three or four forks if each contained a complete set of data, even using all 127 does not allow enough individual data structures for a large file.
VLIR 2 files are to REL files as VLIR 1 files are to SEQ files. Basically, a standard REL-type side-sector structure is constructed and stored for each VLIR-fork chain, including a one-block “hyper side sector” which gives the track and sector of the first side sector (1541/1571) or of the super side sector (1581 and up) for each VLIR-fork chain; if a given entry is unused it is set to a 16-bit zero value. Since the usual directory entry location that would point to the side-sector structure in a real REL file is instead used to point to the info sector, the hyper side sector is stored as block two of the info sector’s chain. There is no notion of “record size,” as there would be with a real REL file. REL files work by being able to tell how many blocks into the data to start looking for a given record, and a VLIR 2 fork works the same way; assuming you know the numeric index into the file of whatever you’re after, dividing by the 254-byte block size will always tell you what sector to look up.
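The nice part is that the lookup arithmetic is trivial. A quick Python sketch of how a byte offset within a VLIR 2 fork would map onto the side-sector structure, assuming the standard 1541/1571 REL geometry of 254 data bytes per block and 120 data-block pointers per side sector:

```python
DATA_BYTES_PER_BLOCK = 254      # 256-byte sector minus the 2-byte chain link
POINTERS_PER_SIDE_SECTOR = 120  # standard REL side-sector capacity

def locate(offset: int):
    """Map a byte offset within a fork to (side sector number,
    pointer slot within that side sector, byte within the data block)."""
    block_index = offset // DATA_BYTES_PER_BLOCK
    byte_in_block = offset % DATA_BYTES_PER_BLOCK
    side_sector = block_index // POINTERS_PER_SIDE_SECTOR
    pointer_slot = block_index % POINTERS_PER_SIDE_SECTOR
    return side_sector, pointer_slot, byte_in_block
```

Random access then costs at most a hyper-side-sector read, a side-sector read, and the data-block read itself, no matter how deep into the fork you want to go.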
(4) Returning to the target-platform question for a moment, I also cannot help observing that there is a much wider selection of special keys to be had on a C128 than on a C64. If I am to have special hot-key combinations to switch back and forth between the command-line focus and the GUI focus, it is made much more practical by the wider key selection on a C128 keyboard.
(5) The GUI, as has been pointed out, doesn’t work so well at 1 MHz. It’s just too much for a stock machine to do useful work and make it look pretty through pixmap manipulation at the same time. This means that, except for special purposes, a text-mode “pseudo-GUI” is as close as I’m going to get while still having it be useable. Happily, the 80-column screen is EXTREMELY customizable. There are tricks to make stuff look fancier without doing a huge amount of calculation all the time.
|
|
|
Post by hydrophilic on Nov 30, 2014 2:59:14 GMT
Gsteemso, I like you.. BIG IDEAS!
However, I have a slightly different view... maybe we can come to a compromise / Grand Union ?
First, the VLIR structure does not *intrinsically* limit you to 127 'forks'... you can have thousands of them. However, GEOS (and Wheels, I believe) will only recognize the first 127. So one method (my preference) is to use the standard VLIR format (type $01) with as many 'forks' as you want... sure, GEOS/Wheels may fail with 'fork' 128+, but it seems like they would ALWAYS fail with your recommendation (type $02).
Second, Berkeley Softworks chose the value of $01 for a good reason... this value is stored in the 'RELative Record Size' field of the CBM directory entry. Normal (real) REL files always have a value of $02 or more... so using $01 was perfect for VLIR files ($00 is used for all non-REL/non-VLIR file types).
Your idea of using $02 might cause problems... but unlikely! First of all, outside of GEOS/Wheels, this value is only referenced when the file type is REL (and I think most of us know that GEOS/VLIR files are type USR). Second of all, even with 'real' REL files, a record size of $02 is **EXTREMELY** unlikely. For example, the smallest practical REL file that I can imagine would have $03 bytes per record = 1-byte 'key' + 2-byte 'abbreviation'... for example:
- DE
- EN
- ES
- FR
(In case it is not obvious, these are just Language codes: German, English, Spanish, French.)
Umm... so type $02 *might* be okay (because I have never seen a REL file with size 2)....
ANYWAY
Another alternative is to use $FF for the 'encoding' (REL record size in standard CBM directories). This should be 'compatible' because a 'real' REL file must have a record size from 2 to 254 (note that $FF is 255... and 255 is *illegal* with REL files).
In summary,
- The existing VLIR format (ID 1) can support an infinite number of 'forks'.
- Using any ID other than 1 (for VLIR) will break GEOS/Wheels compatibility.
- Using an ID of 2 may break (a very few) REL files.
- Using an ID of 255 ($FF) should not break REL files... but is not better than 1 (because 255 is NOT compatible with GEOS/Wheels).
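To make that summary concrete, here is how a loader might interpret the 'record size' directory byte. This is Python pseudocode for illustration only; the return strings, and the meanings assigned to $02 and $FF, are the hypothetical proposals from this thread, not anything GEOS actually defines:

```python
def classify_structure(cbm_file_type: str, size_flag: int) -> str:
    """Interpret the CBM directory 'record size' byte per the thread's
    conventions. $02 and $FF are the *proposed* VLIR extensions."""
    if cbm_file_type == "REL":
        # Real REL files: legal record sizes run from 2 to 254.
        if 2 <= size_flag <= 254:
            return f"REL file, record size {size_flag}"
        return "corrupt REL directory entry"
    if cbm_file_type == "USR":
        if size_flag == 0x01:
            return "GEOS VLIR 1"
        if size_flag == 0x02:
            return "proposed VLIR 2 (breaks GEOS/Wheels)"
        if size_flag == 0xFF:
            return "proposed VLIR extension (breaks GEOS/Wheels)"
    return "not a VLIR file"
```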
OK... that's my opinion... respond
Edit: I don't know how to respond to #4...
In regards to #5, I agree! With 40-columns, you need a pure text interface, or at least a TUI... trying to use real bitmap graphics (GUI) is very sluggish without a SuperCPU (an REU can help, but can not *FIX* the problem). /Edit
|
|
|
Post by gsteemso on Dec 1, 2014 2:12:18 GMT
OK, responding. :¬)
I agree with you that the VLIR 1 format can support more than 127 forks, assuming you don’t care about GEOS/Wheels compatibility, but that still doesn’t help with the initial problem I posed. Specifically, how do you access the middle or end of a file without having to read all the stuff that came before it first? It seems obvious to me that the only answer is what I proposed about adding a REL-type side sector structure. If we do that, then GEOS/Wheels compatibility is sunk ANYWAY, so we don’t need to worry about it. I think what you said about using $FF rather than $02 as a file structure flag sort of makes sense, but if VLIR files are always of type USR and not REL regardless, then it doesn’t really matter either, does it? If the file type is not REL, the structure type flag will never be interpreted as a record size.
Regarding the text-mode GUI, I think it’s the only realistic way to make 80 columns work too, not just 40 columns.
|
|
|
Post by hydrophilic on Dec 2, 2014 9:25:36 GMT
Well, the 'random access' problem is really beyond the scope of GEOS/VLIR/Wheels. (That is, it affects even 'lame' programmers like me not using REL/VLIR.)
I agree that a 'record size' of 1, 2, or 255 does not really matter for vintage software (they should ignore this because file 'type' is USR not REL for GEOS/Wheels files).
But for 'modern' software, like GEOS / Wheels, I think using a value of 1 would provide a limited form of compatibility... using a value of 2 or 255 will never work with GEOS / Wheels.
Personally, I would not introduce a new value unless it adds unavailable possibilities... using 1 still allows an unlimited number of 'forks' so I prefer that... (of course standard GEOS / Wheels will cry like a baby if the requested 'fork' is 128+).
That reminds me... any hardcore GEOS / Wheels hackers around? That is SURELY not me, but it seems you could hack the GEOS / Wheels Kernel to allow more than 127 records per VLIR...
Finally, using a 'type' (REL size) of 255 seems like a sure-fire way to mark VLIR files with more than 127 'forks' (good, +1)... however, it seems like you would need to modify all existing software to make this work (bad, -3). My numbers are arbitrary... feel free to disagree!
|
|
|
Post by gsteemso on Dec 6, 2014 23:09:58 GMT
Just for the sake of generating further discussion, here is a partial list of utility routines that my proposed writers’ OS might provide. Note that the thing is designed from the get-go to support various types of RAM expansion (really, it’s almost required when coding for the 128, I just took inspiration from Craig Bruce’s work and generalized it), as well as multiprocessing based on IEC serial-bus networking. (Specifically, actual internetworking with TCP/IP or AppleTalk or the like is an add-on layer at a higher abstraction level, not built into the OS like having subordinate processing nodes in your Commodore disk drives is.)
Process, thread and subtask management:
- PsNest (spawns a nested process given a reference to code — note that the new process becomes completely separate from the old, which is suspended in favour of the new one à la Craig Bruce’s ACE)
- PsNew (spawns an independent concurrent process given a reference to code — note new ps becomes completely separate from old)
- PsThNw or “Thread New” (works a bit like UNIX fork() except nothing is copied, though new stack and Zero Page are allocated; adds an execution thread in the current process)
- PsReQ or “Re-Queue” (block, suspend, or resume target process (may be self))
- Ps3Q or “Thread Re-Queue” (block, suspend, or resume target thread (may be self))
Callbacks would be treated as asynchronous messages (high-level events; see below).
Memory management:
- MemGrab (the allocator function — parameters: size, 8 bits of {relocatable? outbankable? purgeable? executable? writeable? etc.} flags, desired storage type (internal, REU, GeoRAM, disk-swapfile…), etc.)
- MemRels or “Release” (deallocate the indicated memory block — use sanity checking!)
- MemZoom (dereference and lock the given handle, bringing the object into context if necessary; records prior state of object locality)
- MemUnzm or “Un-Zoom” (end zoomlock on affected handle)
- MemAsk (returns the amount of memory matching the request — used? free? total potentially available to this process? in this bank? size of expansion attached? total number of directly-executable banks? two parameters: bit field listing all possible types of RAM [internal — TC128 variant, internal — ??? variant, 4× expanded internal, REU, GeoRAM, RAMLink, SuperRAM, etc.], and small unsigned integer indicating nature of query)
- MemMkRm or “Make Room” (shuffles/flushes memory to get the biggest possible free block in the current (apparently in-context) bank; parameter for degree of thoroughness: should we move things / page stuff out to REU, bump stuff into other banks, compress stuff, page stuff out to disk, purge things entirely, some combination of these, or what?)
- MemSetF or “Set Flag”, MemClrF or “Clear Flag”, MemFliF or “Flip Flag” (administrative functions — given a memory block handle and an 8-bit flag mask, set, clear or flip [invert] the indicated flags)
- MemShar or “Share” (administrative function, marks memory as being allocated in 2nd process’ table as well as in 1st’s — reverse operation is a simple MemRels)
Concepts would include External (5-byte), Far (4-byte), Near (3-byte) and Live (2-byte) pointers, as well as Local (in-process), System (machine-wide), and Network handles. Handles would consist of a master-pointer index, process or thread ID number, and controller address on the local network segment (Network handles only), each of which would take 1 byte, or maybe 2 for the master-pointer index. The blocks of master pointers (always Far) would occupy a fixed-position, shortest-possible queue per process.
Soft (software-defined) stack management (tangentially related to the foregoing):
- StNew (parameters: how handy do we need to keep it?, backing memory block, handle to Top-of-Stack)
- StKill (decommission entire stack)
- StPush (would need to guard against stack overflow)
- StRsrv or “Reserve” (ditto; for allocating multiple or large objects)
- StPull (would need to guard against stack underflow)
- StDump (ditto; for deallocating large amounts of data, such as argument lists or whole frames)
- StLink (Direct Page parameters: stack handle, frame pointer)
- StUnlk or “Unlink” (ditto)
The system stack should only be used where it cannot be avoided, such as for interrupt handling and saved JSR return addresses. In particular, parameter passing and the like should be done through one or more software-defined stacks.
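To illustrate what StPush and StPull would have to guard against, here is a toy Python model of one such software-defined stack. The downward growth mirrors the 6502 hardware stack; everything else (names, byte-sized cells) is purely illustrative, not the proposed API:

```python
class SoftStack:
    """Minimal model of a software-defined byte stack with the
    overflow/underflow guards StPush/StPull would need."""

    def __init__(self, size: int):
        self.mem = bytearray(size)
        self.top = size            # grows downward, like the 6502 stack

    def push(self, value: int):
        if self.top == 0:
            raise OverflowError("soft stack overflow")
        self.top -= 1
        self.mem[self.top] = value & 0xFF

    def pull(self) -> int:
        if self.top == len(self.mem):
            raise IndexError("soft stack underflow")
        value = self.mem[self.top]
        self.top += 1
        return value
```

On real hardware these checks would be a pair of compares per call, which is exactly the overhead that makes keeping the routines VERY simple so important at 1-2 MHz.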
Inter-process communication (IPC) — based on a near-blind message-passing model; return messages have to be explicitly MsgRx()’d; asynchronous messages are received as events (i.e., the appropriate handler is called):
- MsgTx or “Transmit” (parameters: destination process/thread, message type (categorized as ‘Well Known’ or private; one byte), message subtype (1 byte), 1–2 handles to further data; always asynchronous)
- MsgRx or “Receive” (parameter: expected source process/thread (can be “any”); always blocking)
- MsgAx or “Ask” (parameter: expected source process/thread (can be “any”); always asynchronous. Yes, the name is an atrocious pun.)
Interrupt Service Routines (ISRs) — need to be able to patch whatever I install as a “standard” routine, hopefully without slowing it down too badly. Some sort of RAM vector or set thereof that can be intercepted seems the most straightforward. Maybe have several versions of a “standard” ISR depending on what hardware we care about watching? (e.g. RS-232 or light-pen routines and the like — you only want to waste time on them when you’re actually using them.) Also need a standard, sorted queue of some kind for raster interrupts, such that you can insert one in the proper place and the standard handler will set up the next raster interrupt and jump to your service routine.
- IRqSet or “Request Set”, IRqAdd or “Request Add”, IRqSubt or “Request Subtract”: Given a bitmask listing all possible maskable interrupt sources, either enable a fixed subset of them, add to the enabled set, or subtract from the enabled set.
- INMSet or “Non-Maskable Set”, INMAdd or “Non-Maskable Add”, INMSubt or “Non-Maskable Subtract”: Given a bitmask listing all possible non-maskable interrupt sources, either enable a fixed subset of them, add to the enabled set, or subtract from the enabled set.
- ISetSR or “Set Service Routine”: Given a code pointer (handle?) and an interrupt-source number, install an ISR. Does not automatically ENABLE said ISR.
- IGetSR or “Get Service Routine”: Given an interrupt-source number, returns the current ISR vector associated with it. This allows chaining of service routines. Service routines should by convention begin with a jump or branch past the next-ISR JMP address and end by jumping or branching back to that same next-ISR JMP address, in order to allow automated removal of ISRs from the middle of the sequence.
- IRasAdd or “Raster Add”: Insert an interrupt on the given VIC-IIe raster that will call the given interrupt service callback. The next raster interrupt in the ordered queue will automatically be set up prior to the callback being taken.
- IRasDel or “Raster Delete”: Remove the given raster from the ordered queue of those that will trigger an interrupt.
I freely admit I haven’t thought the interrupt handling completely through yet. The above section on ISRs is extremely subject to revision.
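Still, the chaining convention itself is easy to model. Here is a toy Python version of IGetSR/ISetSR-style chaining, where each installed routine keeps a link to the next one, so removing a handler from the middle of the sequence just relinks its neighbours (the class and method names are placeholders, not the proposed API):

```python
class IsrNode:
    """One installed service routine plus its link to the next one,
    analogous to the next-ISR JMP address in the convention above."""
    def __init__(self, handler, next_node=None):
        self.handler = handler
        self.next = next_node

class IsrChain:
    def __init__(self):
        self.head = None            # analogous to the RAM vector

    def install(self, handler):
        # IGetSR the old vector, ISetSR ourselves in front of it.
        self.head = IsrNode(handler, self.head)

    def remove(self, handler) -> bool:
        # Relink around the node, mirroring automated mid-chain removal.
        prev, node = None, self.head
        while node:
            if node.handler is handler:
                if prev is None:
                    self.head = node.next
                else:
                    prev.next = node.next
                return True
            prev, node = node, node.next
        return False

    def service(self, *args):
        # Walk the chain, calling each handler in turn.
        node = self.head
        while node:
            node.handler(*args)
            node = node.next
```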
Device I/O and the Filesystem — Not sure exactly what routines would be needed here, nor how they would be divided between “I/O in general” and “The Filesystem”. I can say that the filesystem would be closely based on the GEOS extensions to CBM DOS, in order to maintain some kind of compatibility with existing tools. Device I/O would likewise be closely based on the Commodore Kernal model, wherein everything is a file. However, when you combine the two like that you get a kind of “streamed fork” model, to modify HFS terminology. Apart from a binary flag for whether a given file, or fork of a file, is seekable, all I/O would be fundamentally equal as far as user software is concerned. (The screen and the keyboard buffer are seekable. Relative files and the individual forks in my proposed VLIR 2 construct are seekable. Everything else is not, with the possible exception of USR files accessed via custom plug-ins that would tell the system how to find things in the user-defined file structures.)
There would also need to be some bundle of system routines akin to the old Classic Mac OS Resource Manager, allowing arbitrary program data of various standardized types to be simply and conveniently accessed by name or ID number without knowing any details of its storage, whether that would be on disk or temporarily in memory. I think there would need to be some requirement that any custom resource-data type include a machine-parseable template that explains how to make sense of it; otherwise you end up with opaque binary blobs that are less helpful than might otherwise be the case.
There are other things I would need to provide (a relocating loader, for example) but these are a good base for discussion.
|
|
|
Post by hydrophilic on Dec 7, 2014 1:49:43 GMT
Wow, lots of ideas! To go backwards for a second, either GEOS VLIR files or REL files would be needed for random access to data. Unfortunately, both have serious issues for a generic OS. To respect your thread, I'll shut up now and post a separate one.
I get the 'PsNew', 'PsThNw', 'PsReQ', and 'Ps3Q'... they just create new or 're-schedule' existing threads and processes. I don't understand PsNest... so it creates a new process (like PsNew), but it does not run concurrently (unlike PsNew)? So it is a utility function = PsNew + PsReQ(suspend self)? If that is correct, then the name 'PsNest' makes sense (to me) only if the original (suspended) process resumes when the new process ends... you didn't mention that, but is that how it works?
MemGrab / MemRels... simple in principle. I guess the main issue is the USEFUL flags of MemGrab? 'Outbankable' seems reasonable (able to save in another bank, REU, file... I assume)? 'Relocatable' seems a bit dubious... I mean, it sounds like a useful feature, but wouldn't you need some form of callback whenever the OS decides to relocate (so the user(s) can update pointers)? What exactly does 'purgeable' mean? The OS can delete it at any time?... or relocate it at any time? Does 'purgeable' require/imply 'relocatable'? More importantly (perhaps) are flags like 'executable' and 'writeable'... because stock hardware has no memory-protection features, this may involve a lot of (unreliable) overhead in the OS to work semi-well... or it could involve little OS overhead at the risk of being mostly unreliable. Do you have thoughts on this?
So Mem(un)Zoom essentially (un)locks a memory region... except that relocation may be done with MemZoom? I guess these only work with 'relocatable' memory?
MemMkRm sounds like a mess... almost surely required in a sophisticated OS, but like you point out, many options to consider. Anything that isn't 'locked' with MemZoom should be moved out of target memory unless (of course) it has 'unmoveable' flags? I'm not sure which combination of 'relocatable' / 'purgeable' / 'outbankable' flags apply in this case... ALSO, not everything/anything that can be moved should be moved... be practical! Once you've opened enough memory to satisfy the request, then quit.
MemSetF... sounds simple. MemShar... I guess this would mark the memory 'locked' when any of the shared processes are 'active'?
Could you elaborate on pointers? I assume I understand Live (2-byte). I guess Near (3-byte) refers to any bank in system RAM? Or perhaps any bank in REU RAM? So what is Far (4-byte)?... Disk? And then what is External (5-byte)?... Network?
The whole stack thing... it could be FAST using C128 stack relocation... but no real way to set the size. OR it could be slow (but flexible) using a software stack like you say. I guess this is what you mean, because you have StPush and StPull (etc). These would need to be VERY simple to provide acceptable performance at 2 MHz... anybody using a VIC application (1 MHz) might fall asleep (of course, it depends on how much an app uses the stack).
IPC stuff sounds reasonable... for best performance, I suggest only 1 handle... it can hold a block of parameters if needed.
As far as IRQs, you go into detail about rasters... this is good for the VIC-II but would be tricky with the VDC (of course, Risen From Oblivion proves it's not impossible). All I can say is each IRQ routine needs a 'head' and 'tail' vector so that you can (re)chain IRQs as routines are inserted/removed from the list...
For the file system, it seems you want GEOS compatibility? In that case VLIR is your friend... but it will bloat the OS, which must 'know' or 'interrogate' a device for Track/Sector/BAM/Directory structure. It is impossible to implement on a 'raw' medium, like an SD card, CD-ROM, DVD-ROM, BR-ROM... probably on an abstracted medium too (like a network file).
I still don't understand your VLIR 2... I think for this project the best scheme would be an ugly (pretty?) amalgam of VLIR + REL... So, like VLIR, you could have an unlimited number of forks (or max 127 if you demand GEOS compatibility)... and each fork could be accessed at random based on an arbitrary 'record' size (like 128 or 254). In other words, each fork (from the VLIR[2] block) would point to a REL-ish side sector... wow, this is a lot like HFS / NTFS! Very ambitious for an 8-bit! Hahaha... I said RELish side sector! (Yummy)
I really don't understand the 'bundle of system routines akin to the old Classic Mac OS Resource Manager'... I guess this is a virtualized / in-memory version of the VLIR-2 structure I described above (but may not be what you call VLIR-2... your VLIR-2 is still fuzzy to me).
|
|
|
Post by gsteemso on Dec 7, 2014 7:16:04 GMT
No need to go to that extreme. I did mention those issues a couple of posts ago. You kind of got hung up on how the number of forks isn’t really limited to 127 and skipped over everything else I mentioned.
That is it exactly, yes.
‘Outbankable’ does indeed mean that. It’s like ‘relocatable’ on steroids… in addition to being moveable within its current bank, it can also be bumped to a different bank or to a different (slower to be accessed) type of memory entirely, say to REU space or a disk swapfile.
‘Relocatable’ sounds like a lot of effort to go to, but is absolutely vital if you want to avoid getting wedged due to memory fragmentation. After a program has been running for a while, in the pathological case you might have 48 KiB free but it’s all broken up into a bajillion little chunks due to some memory still being allocated in between. If most of that still-allocated memory is relocatable, a call to MemGrab would cause all of the still-allocated chunks to be moved to one contiguous region in RAM, coalescing most or all of that free 48KiB into one big, useful lump. The classic Mac OS worked like that and it worked quite well. Rather than some hairy callback scheme like you describe, all you need to do is reference memory via what are called handles instead of via simple pointers. A handle is a pointer to another pointer… the pointed-to pointer, called a ‘Master Pointer’, is locked (which means it is neither relocatable nor purgeable), and whenever the actual blob of data gets relocated, the master pointer is updated appropriately. That way you can have as many handles in your data as you want, and they all point to the fixed-position master pointer which is kept current by the memory manager.
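A toy model in Python may make the indirection clearer. Everything here (the naive allocator, byte-array "RAM", list-based master records) is deliberately simplistic and assumed for illustration; the point is only that clients hold handles, so compaction updates master records without invalidating anything a client has stored:

```python
class MemoryManager:
    """Toy model of Mac-style handle indirection: a handle dereferences
    a master record, and compaction moves blocks while updating only
    the master records, never the handles themselves."""

    def __init__(self, size: int):
        self.ram = bytearray(size)
        self.allocs = []   # master records: [offset, length, live]

    def grab(self, data: bytes):
        offset = self._first_fit(len(data))
        self.ram[offset:offset + len(data)] = data
        master = [offset, len(data), True]
        self.allocs.append(master)
        return master      # the 'handle' is a reference to this record

    def release(self, handle):
        handle[2] = False  # mark dead; space reclaimed by compact()

    def deref(self, handle) -> bytes:
        offset, length, live = handle
        assert live, "dangling handle"
        return bytes(self.ram[offset:offset + length])

    def compact(self):
        # Slide all live blocks to the bottom, updating master records
        # so every outstanding handle still resolves correctly.
        self.allocs = [h for h in self.allocs if h[2]]
        cursor = 0
        for handle in sorted(self.allocs, key=lambda h: h[0]):
            offset, length, _ = handle
            self.ram[cursor:cursor + length] = self.ram[offset:offset + length]
            handle[0] = cursor
            cursor += length

    def _first_fit(self, length: int) -> int:
        # Naive: allocate past the highest live block; call compact()
        # when that fails, just as MemMkRm would.
        end = max((h[0] + h[1] for h in self.allocs if h[2]), default=0)
        if end + length > len(self.ram):
            raise MemoryError("out of RAM; try compact()")
        return end
```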
‘Purgeable’ simply means that the data is known in advance to not be used very often and to not be writeable, so if the system gets a bit short of space it can just erase the block in question and set a flag in the master pointer. Then, if the datum is in fact referenced again, the resource management system simply reloads it from wherever it originated (usually a data file on disk, but it could just as easily be in battery-backed expansion RAM or the like). In the old Mac OS purgeability was a concept from the resource manager and had no particular bearing on the memory manager, but with how slow a 2MHz machine can feel, the two might need to be a little more closely intertwined. That said, purgeability and relocatability are largely orthogonal.
My thoughts on enforcement of ‘writeable’ and ‘executable’ are pretty straightforward. Here are the relevant passages from my notes:
Security concerns: There is no hardware-level protection mechanism of any sort. Erroneous or malicious code _will_ compromise the system, so reliable intrusion detection is the only recourse. Due to the nature of IEC networking, all nodes on the local segment are necessarily of equal trustworthiness and must be considered to have the same user (or mutually benevolent group thereof). That said, remote users are entirely possible via gateway units and the like; therefore, some consideration of operational security is necessary, beyond the obvious concern that random code off the Internet might do nasty things to the system, whether by accident or by design.
Partial solution: Periodic task that verifies system components by CRC (Cyclic Redundancy Check), one per time-slice, on a rota (always checks itself first; failure there is an automatic system panic). Include a system-wide hot key (chickenhead-restore?) that checks everything and reloads failed modules from disk. Maybe make it happen automatically on check failure? Eventually, at least. Don't want to get in the habit of ignoring serious bugs because things don't always stay corrupted.
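As a sketch of that rota idea (module names here are placeholders, and I have borrowed Python's crc32 where the real thing would run a CRC over ROM/RAM images in place):

```python
import binascii

class IntegrityRota:
    """One module verified per time slice, round-robin, with the
    checker always verifying itself first; a self-check failure is
    the automatic system panic described above."""

    def __init__(self, modules: dict):
        # modules: name -> bytes; must include a "checker" entry.
        self.modules = modules
        self.expected = {n: binascii.crc32(d) for n, d in modules.items()}
        self.rota = [n for n in modules if n != "checker"]
        self.slot = 0

    def tick(self):
        # Self-check first; failure here is an automatic panic.
        if binascii.crc32(self.modules["checker"]) != self.expected["checker"]:
            raise SystemError("panic: integrity checker corrupted")
        name = self.rota[self.slot]
        self.slot = (self.slot + 1) % len(self.rota)
        ok = binascii.crc32(self.modules[name]) == self.expected[name]
        return name, ok      # a failed module would be reloaded from disk
```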
I seem to have skipped over rather a lot of detail that was old knowledge to me but would not be such to most other people. My apologies.
The pointers that take up different numbers of bytes thing is based on the rather clever memory management model Craig Bruce came up with for his text editor, ZEd, which he later refined for his ACE OS. Basically, in an unexpanded C64, you might have a two-byte pointer, and that suffices for most needs. You could add a third byte to specify whether you want RAM or ROM, or perhaps something more exotic like GeoRAM or an REU. In a C128 you also might be referring to internal or external function ROM. Of course, we’re programming for a 128, not just a 64. Now that you’ve opened those possibilities, suddenly you need a fourth byte to indicate the bank within the C128, GeoRAM or REU. I took it one step further and added a fifth byte to specify which of the nodes on the local IEC segment holds the RAM in question. Imagine, if you will, a bank of three or four 128s and a CMD hard drive, acting as a cluster computer!
Basically, a Live pointer refers to an address within the active bank (the 64 KiB that the CPU sees right now). A Near pointer is similar but can refer to any bank in the current type of RAM (e.g., internal C128 bank or REU bank). A Far pointer can refer to other types of RAM as well, say if you want to refer to something in the REU from main RAM. Then, of course, an External pointer can identify memory in any node on the local network segment. All of these are useful in certain circumstances and not so useful (being either overkill or inadequate) in others.
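As an illustration, the four pointer classes might pack down like this. The field widths, ordering, and little-endian layout are assumptions for the sketch; a real dispatcher would pick whatever layout suits it.

```python
# Illustrative packing of the four pointer classes described above.
import struct

def live(addr):                 # 2 bytes: address within the active bank
    return struct.pack("<H", addr)

def near(bank, addr):           # 3 bytes: any bank in the current RAM type
    return struct.pack("<BH", bank, addr)

def far(mem_type, bank, addr):  # 4 bytes: memory type (RAM/ROM/REU/...), bank, address
    return struct.pack("<BBH", mem_type, bank, addr)

def external(node, mem_type, bank, addr):  # 5 bytes: IEC node, then as Far
    return struct.pack("<BBBH", node, mem_type, bank, addr)
```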
Not really, no. ‘Locked’ is a short way of saying ‘neither relocatable nor purgeable.’ One thing an OS needs to keep track of is which memory is allocated and which is not, and the answer can depend on which program you're looking at. Suppose I allocate a block of memory and tell another program about it. Then I finish what I was doing and terminate myself. Suddenly the other program has a dangling pointer to the memory I told it about before I quit. Thus, MemShar: if I call that, the shared block is not deallocated when I stop executing.
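A toy model of that lifetime rule, in Python. The call names (mem_alloc/mem_shar/terminate) are illustrative only; the point is that blocks normally die with their owner, but a shared block survives the owner's termination, so the other program's pointer stays valid.

```python
class MemoryManager:
    def __init__(self):
        self.blocks = {}       # handle -> {"owner": pid, "shared": bool}
        self.next_handle = 1

    def mem_alloc(self, pid):
        h = self.next_handle
        self.next_handle += 1
        self.blocks[h] = {"owner": pid, "shared": False}
        return h

    def mem_shar(self, handle):
        self.blocks[handle]["shared"] = True   # survives the owner's exit

    def terminate(self, pid):
        # Reclaim the program's private blocks; shared ones persist.
        self.blocks = {h: b for h, b in self.blocks.items()
                       if b["shared"] or b["owner"] != pid}
```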
There are practical concerns here too. If we implement basic sanity-checking such that a program can only write to its own ‘zone’ of memory, suddenly we can’t access shared memory blocks. Okay, so we require that shared blocks be relocatable, and sharing one moves it to a special region of RAM where writes are explicitly allowed even though it’s not within a program’s own private memory space. Full sanity checking slows things down a lot, so in practice this may never come up, but it’s still worth considering if the resource manager or something similar implements checking just on system calls.
Agreed about the stack relocation. It needs to be tried both ways before we can say which works better. I fully intend that VIC applications would use “fast borders” to achieve ≈1.3 MHz operation, so it wouldn’t be quite so dire as all that, but I agree it would need to be looked at.
AFAIK the VDC does not have a built-in raster interrupt (demos do what they do by careful CIA timing), so this is a false concern. I only specifically mention the VIC-IIe raster interrupt because it is already halfway implemented by the Kernal, and not providing similar functionality would be regressive.
What you say about “head” and “tail” vectors is pretty much what I meant when talking about how an ISR would need to have its exit vector right after the beginning so it could be identified by software. Trust you to find a much simpler way to describe it… *grin*
It is not actually impossible on a ‘raw’ or abstracted medium. You would just make a directory with extension “.VLIR” and its contents would be the various VLIR forks, named with numbers so as to keep them straight. This is no different in concept from flat files with extensions “.SEQ” or “.PRG”.
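For concreteness, here is how that emulation might look on an abstracted medium, sketched in Python over an ordinary filesystem. The path layout (NAME.VLIR as a directory, forks named by number) follows the description above; the function names are made up.

```python
# Sketch of emulating VLIR forks on a flat/abstracted medium: the
# record file becomes a directory named NAME.VLIR whose entries are
# the individual forks, named with numbers to keep them straight.
from pathlib import Path

def write_fork(root: Path, name: str, fork: int, data: bytes):
    d = root / f"{name}.VLIR"
    d.mkdir(parents=True, exist_ok=True)
    (d / str(fork)).write_bytes(data)

def read_fork(root: Path, name: str, fork: int) -> bytes:
    return (root / f"{name}.VLIR" / str(fork)).read_bytes()
```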
VLIR 2 is exactly as you describe, like VLIR 1 but with a side-sector structure for each fork. I honestly didn’t think I’d done that poor a job of explaining it! There would be no need for a record size; once you worked out where in the fork you wanted to access, you’d just divide by 254 bytes per sector and use the side-sector structure to look up the sector in question. REL files only need a notion of ‘record size’ because they keep track of that position in the file based on a record number, a concept which we would not be making use of.
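The lookup arithmetic is worth spelling out. With 254 data bytes per sector (256 minus the two-byte link), a byte offset within a fork maps to a side-sector index with a single divide, and no record size is ever needed. A minimal sketch, with the side-sector chain simplified to a flat list of (track, sector) pairs:

```python
# Worked arithmetic for the VLIR-2 lookup described above.
DATA_BYTES_PER_SECTOR = 254

def locate(offset: int, side_sectors):
    """Map a byte offset within a fork to (track, sector, byte_in_sector).

    side_sectors is the fork's sector list, e.g. [(t0, s0), (t1, s1), ...].
    (A real side-sector chain is structured in groups; this is just the idea.)
    """
    index, byte_in_sector = divmod(offset, DATA_BYTES_PER_SECTOR)
    track, sector = side_sectors[index]
    return track, sector, byte_in_sector
```

So offset 300 lands 46 bytes into the second data sector of the fork, found with one division and one table lookup.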
The concept of “resources” is largely orthogonal to the concept of a VLIR file, though having VLIR makes resources much easier to implement. A resource is a numbered and potentially named fragment of data, having a known format. In the classic Mac OS they were basically a small subsidiary filesystem stored in the “resource fork” of a file. The resource fork was divided into sections according to a resource’s “type”, which was a 32-bit quantity uniquely identifying the format of the data in question. The 32-bit resource type was usually written as a 4-character MacRoman string for simplicity. For example, executable code was stored in resources of type ‘CODE’, pull-down menus were defined by resources of type ‘MENU’, and the program’s icons were stored in a set of six standardized icon resources (black and white, 16-colour and 256-colour, in 16x16-pixel and 32x32-pixel sizes). The black and white resources also included a mask that defined what parts of the icon were clickable with the mouse, so they didn’t always have to be full-sized squares.
Once you knew what type of resource you wanted, you would access the specific one you were after by knowing its number. Later the concept of named resources made life a little easier. There were system routines for finding, accessing, purging, isolating, and doing various other maintenance tasks on individual resources. Something akin to this system would be of enormous utility in a new OS, though there are lessons to be learned from what did and didn’t work as well as hoped in the old Mac OS implementation.
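A minimal model of that access pattern, as a sketch. The API names (add, get_resource, get_named_resource) are mine; the shape — lookup by four-character type plus ID, with names as a later convenience layered on top — follows the classic Mac scheme described above.

```python
class ResourceMap:
    def __init__(self):
        self.by_id = {}     # (type, id) -> data
        self.by_name = {}   # (type, name) -> id

    def add(self, rtype: str, rid: int, data: bytes, name: str = None):
        assert len(rtype) == 4, "resource types are 4-character codes"
        self.by_id[(rtype, rid)] = data
        if name is not None:
            self.by_name[(rtype, name)] = rid

    def get_resource(self, rtype: str, rid: int) -> bytes:
        return self.by_id[(rtype, rid)]

    def get_named_resource(self, rtype: str, name: str) -> bytes:
        # Named lookup resolves to an ID first, as in the later Mac APIs.
        return self.by_id[(rtype, self.by_name[(rtype, name)])]
```

Purging, isolation, and the other maintenance operations would hang off the same map.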