|
Post by gsteemso on Jan 6, 2015 23:13:48 GMT
I was taking the view that everything would work the same regardless of whether you are using VIC or VDC, with the trivial difference that you’d get an error if you tried to set a raster interrupt on VDC. Application programs would need to special case between the two display types, but since they have very different colour handling, they’d need to do that anyway for any kind of detailed graphics work.
I’ve recently gotten onto a bit of a “learn new programming languages” kick. I keep getting sidetracked by wanting to implement Lua or Rust on the C128… Rust would be a good language to implement the OS in if I wasn’t going to do it in assembler, and Lua is designed as an extension language, so would be pretty good for user coding. Not ideal, though. It’s less of a free-form language than I like.
|
|
|
Post by gsteemso on Dec 25, 2014 22:46:32 GMT
It’s got my vote!
|
|
|
Oxymorons
Post by gsteemso on Dec 15, 2014 15:54:07 GMT
Movie theatre operators everywhere should be embarrassed by this one, though I admit it isn’t computer-related: “Evening Matinée”
Another old favourite is “military intelligence.”
|
|
|
Post by gsteemso on Dec 10, 2014 3:39:21 GMT
I’m not following your reasoning. Why would I need to split the OS along two development forks? It doesn’t matter whether you want to use VIC-IIe or VDC, they’re both supported the same way except for the business with the raster interrupt, and I have no intention of tying up an entire CIA timer just for that!
|
|
|
Post by gsteemso on Dec 8, 2014 15:39:23 GMT
Hum. Questions. Happily, answers exist. The classic Mac OS generated pointers and handles by Memory Manager system calls… MemGrab would fulfill this function, though I’m sure extra functions to do things like generate a handle given a specific pointer would be useful as well. I am trying to avoid duplicating the functionality of all 10 thousand Classic Mac OS system routines, so I tried to keep it super simple in my description.
You understand correctly the nature of the four sizes of pointer.
As far as shared memory being “locked”… Not exactly. “Locked” is a specific technical term with a specific meaning—a memory block that is locked cannot be moved or purged (nor can it be outbanked, for that matter). If a shared memory block was flagged as moveable or purgeable, there is no reason it could not be moved or purged, as the master pointer would be updated no matter what program was doing it. You are using “locked” to mean “not deallocatable”, which is true in this case but not what I think of when I see the term “locked”. There would be reference count information for how many processes are sharing the memory, and every time it gets MemRels’d by one of them that count would decrease by one. Only when it reached zero would the memory actually be reclaimed. I’d have to be very careful how I design the data structures to prevent a memory leak in the case of one of the sharing programs crashing before it could release the data block, but it is not an insurmountable problem.
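To make the reference-count bookkeeping concrete, here is a sketch in Python (purely illustrative; this is obviously not how the 8502 version would look, and the MemShar/MemRels names are from my routine list, not real code yet):

```python
# Illustrative sketch: reference-counted shared memory blocks.
# MemShar bumps the count when a second process gains access;
# MemRels decrements it, and the block is only reclaimed at zero.

class SharedBlock:
    def __init__(self, size):
        self.size = size
        self.refcount = 1      # allocated by one process initially
        self.freed = False

    def mem_shar(self):
        """Mark the block as shared by one more process."""
        self.refcount += 1

    def mem_rels(self):
        """Release one process's claim; reclaim the block at zero."""
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True  # actual reclamation would happen here

blk = SharedBlock(254)
blk.mem_shar()     # a second process now shares the block
blk.mem_rels()     # the first process terminates
assert not blk.freed   # still held by the second process
blk.mem_rels()     # the second process releases it
assert blk.freed       # now it is actually reclaimed
```

The crash-leak problem I mentioned amounts to making sure that final mem_rels() still happens even if the owning process dies abnormally.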
You raise an interesting point about emulating raster interrupts on the VDC, but I’m not sure how practical that would be. As I understand matters, the timing is SO ridiculously fiddly that it really is not very feasible to implement as a permanent OS service. Apart from anything else, it would permanently tie up one of the CIA timers, which is very far from ideal—this is an OS, it needs the CIAs for things like RS-232 and IEC serial-bus access, potentially both at the same time.
That’s exactly what I was thinking to emulate a VLIR file on a non-direct-sectored medium, yes.
Your summary of VLIR 2 files is, in fact, potentially an improvement over what I had originally thought up. I’d originally meant that each ‘fork’ chain in the file would be a linear file as per normal, but there would be what I called a “hyper side sector” containing starting links to all the side sector chains as well. Since there’s no obvious place to put such a thing, I’d thought it could be scabbed on as the second block of the info sector’s chain. What you suggest is much more straightforward, at the trivial cost of having to initially read one extra sector link if you want to read out the entire fork à la VLIR 1.
And yes, random access to any specific byte within each fork was the ENTIRE motivation for developing VLIR 2 files.
I said the concept of resources in general was orthogonal to the VLIR structure, not what you misquote me as. As you correctly point out, having a VLIR structure to work with is a natural match for IMPLEMENTING resources. It seems we are in violent agreement, as a friend of mine once put it. :¬)
|
|
|
Post by gsteemso on Dec 7, 2014 7:16:04 GMT
No need to go to that extreme. I did mention those issues a couple of posts ago. You kind of got hung up on how the number of forks isn’t really limited to 127 and skipped over everything else I mentioned.
That is it exactly, yes.
‘Outbankable’ does indeed mean that. It’s like ‘relocatable’ on steroids… in addition to being moveable within its current bank, it can also be bumped to a different bank or to a different (slower to be accessed) type of memory entirely, say to REU space or a disk swapfile.
‘Relocatable’ sounds like a lot of effort to go to, but is absolutely vital if you want to avoid getting wedged due to memory fragmentation. After a program has been running for a while, in the pathological case you might have 48 KiB free but it’s all broken up into a bajillion little chunks due to some memory still being allocated in between. If most of that still-allocated memory is relocatable, a call to MemGrab would cause all of the still-allocated chunks to be moved to one contiguous region in RAM, coalescing most or all of that free 48KiB into one big, useful lump. The classic Mac OS worked like that and it worked quite well. Rather than some hairy callback scheme like you describe, all you need to do is reference memory via what are called handles instead of via simple pointers. A handle is a pointer to another pointer… the pointed-to pointer, called a ‘Master Pointer’, is locked (which means it is neither relocatable nor purgeable), and whenever the actual blob of data gets relocated, the master pointer is updated appropriately. That way you can have as many handles in your data as you want, and they all point to the fixed-position master pointer which is kept current by the memory manager.
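The double indirection is easier to see in a sketch. Again this is just illustrative Python, with the master pointer table modelled as a list and RAM as a dictionary; the real thing would be a fixed block of Far pointers:

```python
# Sketch of handle-based relocation: a handle is an index into a
# fixed-position table of master pointers. Moving the data updates
# only the master pointer, so every handle stays valid.

heap = {}                 # address -> data blob (stand-in for RAM)
master_pointers = []      # the fixed-position master pointer table

def new_handle(addr, data):
    heap[addr] = data
    master_pointers.append(addr)
    return len(master_pointers) - 1   # the handle itself

def deref(handle):
    """Follow handle -> master pointer -> actual data."""
    return heap[master_pointers[handle]]

def relocate(handle, new_addr):
    """What the memory manager does during compaction."""
    old = master_pointers[handle]
    heap[new_addr] = heap.pop(old)
    master_pointers[handle] = new_addr  # all handles remain valid

h = new_handle(0xC000, b"some data")
relocate(h, 0x4000)            # compaction moved the block
assert deref(h) == b"some data"  # the handle still works
```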
‘Purgeable’ simply means that the data is known in advance to not be used very often and to not be writeable, so if the system gets a bit short of space it can just erase the block in question and set a flag in the master pointer. Then, if the datum is in fact referenced again, the resource management system simply reloads it from wherever it originated (usually a data file on disk, but it could just as easily be in battery-backed expansion RAM or the like). In the old Mac OS purgeability was a concept from the resource manager and had no particular bearing on the memory manager, but with how slow a 2MHz machine can feel, the two might need to be a little more closely intertwined. That said, purgeability and relocatability are largely orthogonal.
My thoughts on enforcement of ‘writeable’ and ‘executable’ are pretty straightforward. Here are the relevant passages from my notes:
Security concerns: There is no hardware-level protection mechanism of any sort. Erroneous or malicious code _will_ compromise the system, so reliable intrusion detection is the only recourse. Due to the nature of IEC networking, all nodes on the local segment are necessarily of equal trustworthiness and must be considered to have the same user (or mutually benevolent group thereof). That said, remote users are entirely possible via gateway units and the like; therefore, some consideration of operational security is necessary, beyond the obvious concern that random code off the Internet might do nasty things to the system, whether by accident or by design.
Partial solution: Periodic task that verifies system components by CRC (Cyclic Redundancy Check), one per time-slice, on a rota (always checks itself first; failure there is an automatic system panic). Include a system-wide hot key (chickenhead-restore?) that checks everything and reloads failed modules from disk. Maybe make it happen automatically on check failure? Eventually, at least. Don't want to get in the habit of ignoring serious bugs because things don't always stay corrupted.
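The rota logic might look something like this (illustrative Python only; zlib.crc32 stands in for whatever CRC the real system would use, and the module names and byte strings are made up):

```python
# Sketch of the rota checker: one module verified per time-slice,
# always verifying the checker itself first.
import zlib

modules = {                      # name -> module code bytes
    "checker": b"\xa9\x00\x60",
    "rs232":   b"\xa2\xff\x60",
    "iec":     b"\xa0\x10\x60",
}
# CRCs recorded at load time, before any corruption can occur
expected = {name: zlib.crc32(code) for name, code in modules.items()}

def check_one(name):
    """True if the module still matches its recorded CRC."""
    return zlib.crc32(modules[name]) == expected[name]

def rota():
    """Yield module names in check order, checker first each round."""
    others = [n for n in modules if n != "checker"]
    i = 0
    while True:
        yield "checker"          # failure here = automatic panic
        yield others[i % len(others)]
        i += 1

r = rota()
assert check_one(next(r))        # the checker verifies itself
modules["iec"] = b"\x00corrupt"  # simulate memory corruption
assert not check_one("iec")      # would trigger a reload from disk
```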
I seem to have skipped over rather a lot of detail that was old knowledge to me but would not be such to most other people. My apologies.
The pointers that take up different numbers of bytes thing is based on the rather clever memory management model Craig Bruce came up with for his text editor, ZEd, which he later refined for his ACE OS. Basically, in an unexpanded C64, you might have a two-byte pointer, and that suffices for most needs. You could add a third byte to specify whether you want RAM or ROM, or perhaps something more exotic like GeoRAM or an REU. In a C128 you also might be referring to internal or external function ROM. Of course, we’re programming for a 128, not just a 64. Now that you’ve opened those possibilities, suddenly you need a fourth byte to indicate the bank within the C128, GeoRAM or REU. I took it one step further and added a fifth byte to specify which of the nodes on the local IEC segment holds the RAM in question. Imagine, if you will, a bank of three or four 128s and a CMD hard drive, acting as a cluster computer!
Basically, a Live pointer refers to an address within the active bank (the 64 KiB that the CPU sees right now). A Near pointer is similar but can refer to any bank in the current type of RAM (e.g., internal C128 bank or REU bank). A Far pointer can refer to other types of RAM as well, say if you want to refer to something in the REU from main RAM. Then, of course, an External pointer can identify memory in any node on the local network segment. All of these are useful in certain circumstances and not so useful (being either overkill or inadequate) in others.
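As a concrete illustration of the four tiers, here is how the byte layouts might nest (Python sketch; the little-endian ordering and the exact field layout are my working assumption, not settled):

```python
# Sketch of the four pointer sizes as little-endian byte strings:
# Live = 2 bytes (address), Near adds a bank byte, Far adds a
# memory-type byte, External adds a network-node byte.

def live_ptr(addr):
    return addr.to_bytes(2, "little")          # 2 bytes

def near_ptr(addr, bank):
    return live_ptr(addr) + bytes([bank])      # 3 bytes

def far_ptr(addr, bank, mem_type):
    return near_ptr(addr, bank) + bytes([mem_type])   # 4 bytes

def external_ptr(addr, bank, mem_type, node):
    return far_ptr(addr, bank, mem_type) + bytes([node])  # 5 bytes

# e.g. address $C000 in REU bank 3 on IEC node 9 (codes invented):
p = external_ptr(0xC000, 3, 1, 9)
assert len(p) == 5
assert len(live_ptr(0xC000)) == 2
```

Note how each longer form is a strict superset of the shorter one, so promoting a Live pointer to Near or Far is just appending bytes.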
Not really, no. ‘Locked’ is a short way of saying ‘neither relocatable nor purgeable.’ One thing an OS needs to keep track of is what memory is allocated and what is not. That depends on what program you're looking at. Suppose I allocate a block of memory, and tell another program about it. Then I finish what I was doing and terminate myself. Suddenly the other program has a dangling pointer to the memory I told it about before I quit. Thus, MemShar. If I call that, the shared block is not deallocated when I stop executing.
There are practical concerns here too. If we implement basic sanity-checking such that a program can only write to its own ‘zone’ of memory, suddenly we can’t access shared memory blocks. Okay, we’ll just require that they be relocatable and sharing them causes them to be moved to a special region of RAM where we explicitly allow writes even though it’s not within a program’s own private memory space. Sanity checking slows things down a lot, so this one may not be a concern, but it’s still worth considering if the resource manager or something tries to implement it just on system calls.
Agreed about the stack relocation. It needs to be tried both ways before we can say which works better. I fully intend that VIC applications would use “fast borders” to achieve ≈1.3MHz operation, so it wouldn’t be quite so dire as all that, but I agree it would need to be looked at.
AFAIK the VDC does not have a built-in raster interrupt (demos do what they do by careful CIA timing), so this is a false concern. I only specifically mention the VIC-IIe raster interrupt because it is already halfway implemented by the Kernal, and not providing similar functionality would be regressive.
What you say about “head” and “tail” vectors is pretty much what I meant when talking about how an ISR would need to have its exit vector right after the beginning so it could be identified by software. Trust you to find a much simpler way to describe it… *grin*
It is not actually impossible on a ‘raw’ or abstracted medium. You would just make a directory with extension “.VLIR” and its contents would be the various VLIR forks, named with numbers so as to keep them straight. This is no different in concept from flat files with extensions “.SEQ” or “.PRG”.
VLIR 2 is exactly as you describe, like VLIR 1 but with a side-sector structure for each fork. I honestly didn’t think I’d done that poor a job of explaining it! There would be no need for a record size; once you worked out where in the fork you wanted to access, you’d just divide by 254 bytes per sector and use the side-sector structure to look up the sector in question. REL files only need a notion of ‘record size’ because they keep track of that position in the file based on a record number, a concept which we would not be making use of.
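The seek arithmetic really is that simple. Sketching it in Python (the 254 figure is the usual 256-byte CBM sector minus the 2-byte forward link):

```python
# Byte offset within a VLIR 2 fork -> (sector index, offset within
# that sector). The sector index is then looked up in the fork's
# side-sector structure to find the actual track/sector.

BYTES_PER_SECTOR = 254   # 256-byte sector minus 2-byte link

def locate(offset):
    """Return (sector_index, byte_within_sector) for a fork offset."""
    return divmod(offset, BYTES_PER_SECTOR)

assert locate(0)    == (0, 0)
assert locate(253)  == (0, 253)   # last byte of the first sector
assert locate(254)  == (1, 0)     # first byte of the second sector
assert locate(1000) == (3, 238)
```

No record size is involved anywhere, which is exactly why REL-style record bookkeeping is unnecessary.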
The concept of “resources” is largely orthogonal to the concept of a VLIR file, though having VLIR makes resources much easier to implement. A resource is a numbered and potentially named fragment of data, having a known format. In the classic Mac OS they were basically a small subsidiary filesystem stored in the “resource fork” of a file. The resource fork was divided into sections according to a resource’s “type”, which was a 32-bit quantity uniquely identifying the format of the data in question. The 32-bit resource type was usually specified as a 4-character MacRoman ASCII string for simplicity. For example, executable code was stored in resources of type ‘CODE’, pull-down menus were defined by resources of type ‘MENU’, and the program’s icons were stored in a set of six standardized icon resources (black and white, 16-colour and 256-colour, in 16x16-pixel and 32x32-pixel sizes). The black and white resources also included a mask that defined what parts of the icon were clickable with the mouse, so they didn’t have to be always full-sized squares.
Once you knew what type of resource you wanted, you would access the specific one you were after by knowing its number. Later the concept of named resources made life a little easier. There were system routines for finding, accessing, purging, isolating, and doing various other maintenance tasks on individual resources. Something akin to this system would be of enormous utility in a new OS, though there are lessons to be learned from what did and didn’t work as well as hoped in the old Mac OS implementation.
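For anyone unfamiliar with the Mac scheme: the four-character type is literally just a 32-bit integer whose bytes happen to be printable. A quick Python illustration:

```python
# A Mac-style resource type is four 8-bit characters packed
# big-endian into one 32-bit integer, so 'CODE' and 'MENU' are
# ordinary numbers that also read as mnemonics.

def res_type(tag):
    assert len(tag) == 4
    return int.from_bytes(tag.encode("ascii"), "big")

assert res_type("CODE") == 0x434F4445
assert res_type("MENU") == 0x4D454E55
```

Comparing resource types is therefore a plain integer compare, which matters a great deal on an 8-bit CPU.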
|
|
|
Post by gsteemso on Dec 6, 2014 23:09:58 GMT
Just for the sake of generating further discussion, here is a partial list of utility routines that my proposed writers’ OS might provide. Note that the thing is designed from the get-go to support various types of RAM expansion (really, it’s almost required when coding for the 128, I just took inspiration from Craig Bruce’s work and generalized it), as well as multiprocessing based on IEC serial-bus networking. (Specifically, actual internetworking with TCP/IP or AppleTalk or the like is an add-on layer at a higher abstraction level, not built into the OS like having subordinate processing nodes in your Commodore disk drives is.)
Process, thread and subtask management:
- PsNest (spawns a nested process given a reference to code — note that the new process becomes completely separate from the old, which is suspended in favour of the new one à la Craig Bruce’s ACE)
- PsNew (spawns an independent concurrent process given a reference to code — note new ps becomes completely separate from old)
- PsThNw or “Thread New” (works a bit like UNIX fork() except nothing is copied, though new stack and Zero Page are allocated; adds an execution thread in the current process)
- PsReQ or “Re-Queue” (block, suspend, or resume target process (may be self))
- Ps3Q or “Thread Re-Queue” (block, suspend, or resume target thread (may be self))
Callbacks would be treated as asynchronous messages (high-level events; see below).
Memory management:
- MemGrab (the allocator function — parameters: size, 8 bits of {relocatable? outbankable? purgeable? executable? writeable? etc.} flags, desired storage type (internal, REU, GeoRAM, disk-swapfile…), etc.)
- MemRels or “Release” (deallocate the indicated memory block — use sanity checking!)
- MemZoom (dereference and lock the given handle, bringing the object into context if necessary; records prior state of object locality)
- MemUnzm or “Un-Zoom” (end zoomlock on affected handle)
- MemAsk (returns the amount of memory matching the request — used? free? total potentially available to this process? in this bank? size of expansion attached? total number of directly-executable banks? two parameters: bit field listing all possible types of RAM [internal — TC128 variant, internal — ??? variant, 4× expanded internal, REU, GeoRAM, RAMLink, SuperRAM, etc.], and small unsigned integer indicating nature of query)
- MemMkRm or “Make Room” (shuffles/flushes memory to get the biggest possible free block in the current (apparently in-context) bank; parameter for degree of thoroughness: should we move things / page stuff out to REU, bump stuff into other banks, compress stuff, page stuff out to disk, purge things entirely, some combination of these, or what?)
- MemSetF or “Set Flag”, MemClrF or “Clear Flag”, MemFliF or “Flip Flag” (administrative functions — given a memory block handle and an 8-bit flag mask, set, clear or flip [invert] the indicated flags)
- MemShar or “Share” (administrative function, marks memory as being allocated in 2nd process’ table as well as in 1st’s — reverse operation is a simple MemRels)
Concepts would include External (5-byte), Far (4-byte), Near (3-byte) and Live (2-byte) pointers, as well as Local (in-process), System (machine-wide), and Network handles. Handles would consist of a master-pointer index, process or thread ID number, and controller address on the local network segment (Network handles only), each of which would take 1 byte, or maybe 2 for the master-pointer index. The blocks of master pointers (always Far) would occupy a fixed-position, shortest-possible queue per process.
Soft (software-defined) stack management (tangentially related to the foregoing):
- StNew (parameters: how handy do we need to keep it?, backing memory block, handle to Top-of-Stack)
- StKill (decommission entire stack)
- StPush (would need to guard against stack overflow)
- StRsrv or “Reserve” (ditto; for allocating multiple or large objects)
- StPull (would need to guard against stack underflow)
- StDump (ditto; for deallocating large amounts of data, such as argument lists or whole frames)
- StLink (Direct Page parameters: stack handle, frame pointer)
- StUnlk or “Unlink” (ditto)
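The overflow/underflow guards I keep mentioning amount to this kind of thing (illustrative Python only; the real StPush/StPull would work against a backing memory block, and the error-signalling convention is not decided):

```python
# Sketch of a guarded soft stack: StPush refuses to overflow into
# neighbouring memory, StPull refuses to underflow past the base.

class SoftStack:
    def __init__(self, size):
        self.data = []
        self.size = size        # capacity of the backing block

    def push(self, value):      # cf. StPush
        if len(self.data) >= self.size:
            return False        # stack overflow; caller must handle
        self.data.append(value)
        return True

    def pull(self):             # cf. StPull
        if not self.data:
            return None         # stack underflow; caller must handle
        return self.data.pop()

st = SoftStack(2)
assert st.push(1) and st.push(2)
assert not st.push(3)           # overflow refused, nothing clobbered
assert st.pull() == 2
```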
The system stack should only be used where it cannot be avoided, such as for interrupt handling and saved JSR return addresses. In particular, parameter passing and the like should be done through one or more software-defined stacks.
Inter-process communication (IPC) — based on a near-blind message-passing model; return messages have to be explicitly MsgRx()’d; asynchronous messages are received as events (i.e., the appropriate handler is called):
- MsgTx or “Transmit” (parameters: destination process/thread, message type (categorized as ‘Well Known’ or private; one byte), message subtype (1 byte), 1–2 handles to further data; always asynchronous)
- MsgRx or “Receive” (parameter: expected source process/thread (can be “any”); always blocking)
- MsgAx or “Ask” (parameter: expected source process/thread (can be “any”); always asynchronous. Yes, the name is an atrocious pun.)
Interrupt Service Routines (ISRs) — need to be able to patch whatever I install as a “standard” routine, hopefully without slowing it down too badly. Some sort of RAM vector or set thereof that can be intercepted seems the most straightforward. Maybe have several versions of a “standard” ISR depending on what hardware we care about watching? (e.g. RS-232 or light-pen routines and the like — you only want to waste time on them when you’re actually using them.) Also need a standard, sorted queue of some kind for raster interrupts, such that you can insert one in the proper place and the standard handler will set up the next raster interrupt and jump to your service routine.
- IRqSet or “Request Set”, IRqAdd or “Request Add”, IRqSubt or “Request Subtract”: Given a bitmask listing all possible maskable interrupt sources, either enable a fixed subset of them, add to the enabled set, or subtract from the enabled set.
- INMSet or “Non-Maskable Set”, INMAdd or “Non-Maskable Add”, INMSubt or “Non-Maskable Subtract”: Given a bitmask listing all possible non-maskable interrupt sources, either enable a fixed subset of them, add to the enabled set, or subtract from the enabled set.
- ISetSR or “Set Service Routine”: Given a code pointer (handle?) and an interrupt-source number, install an ISR. Does not automatically ENABLE said ISR.
- IGetSR or “Get Service Routine”: Given an interrupt-source number, returns the current ISR vector associated with it. This allows chaining of service routines. Service routines should by convention begin with a jump or branch past the next-ISR JMP address and end by jumping or branching back to that same next-ISR JMP address, in order to allow automated removal of ISRs from the middle of the sequence.
- IRasAdd or “Raster Add”: Insert an interrupt on the given VIC-IIe raster that will call the given interrupt service callback. The next raster interrupt in the ordered queue will automatically be set up prior to the callback being taken.
- IRasDel or “Raster Delete”: Remove the given raster from the ordered queue of those that will trigger an interrupt.
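The ordered queue behind IRasAdd/IRasDel is nothing exotic. A Python sketch of the bookkeeping (illustrative only; the callback names are invented, and on the real hardware the handler would reprogram the VIC-IIe raster-compare register with the next entry before taking the callback):

```python
# Sketch of the ordered raster-interrupt queue: entries are kept
# sorted by raster line so the standard handler always knows which
# line to program next.
import bisect

raster_queue = []   # sorted list of (raster_line, callback_name)

def iras_add(line, callback):
    """Insert an entry in raster order (cf. IRasAdd)."""
    bisect.insort(raster_queue, (line, callback))

def iras_del(line):
    """Remove all entries for the given raster (cf. IRasDel)."""
    raster_queue[:] = [e for e in raster_queue if e[0] != line]

iras_add(200, "split_screen")
iras_add(50, "top_bar")
assert [e[0] for e in raster_queue] == [50, 200]  # kept sorted
iras_del(50)
assert raster_queue[0][0] == 200
```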
I freely admit I haven’t thought the interrupt handling completely through yet. The above section on ISRs is extremely subject to revision.
Device I/O and the Filesystem — Not sure exactly what routines would be needed here, nor how they would be divided between “I/O in general” and “The Filesystem”. I can say that the filesystem would be closely based on the GEOS extensions to CBM DOS, in order to maintain some kind of compatibility with existing tools. Device I/O would likewise be closely based on the Commodore Kernal model, wherein everything is a file. However, when you combine the two like that you get a kind of “streamed fork” model, to modify HFS terminology. Apart from a binary flag for whether a given file, or fork of a file, is seekable, all I/O would be fundamentally equal as far as user software is concerned. (The screen and the keyboard buffer are seekable. Relative files and the individual forks in my proposed VLIR 2 construct are seekable. Everything else is not, with the possible exception of USR files accessed via custom plug-ins that would tell the system how to find things in the user-defined file structures.)
There would also need to be some bundle of system routines akin to the old Classic Mac OS Resource Manager, allowing arbitrary program data of various standardized types to be simply and conveniently accessed by name or ID number without knowing any details of its storage, whether that would be on disk or temporarily in memory. I think there would need to be some requirement that any custom resource-data type include a machine-parseable template that explains how to make sense of it; otherwise you end up with opaque binary blobs that are less helpful than might otherwise be the case.
There are other things I would need to provide (a relocating loader, for example) but these are a good base for discussion.
|
|
|
Post by gsteemso on Dec 1, 2014 2:12:18 GMT
OK, responding. :¬)
I agree with you that the VLIR 1 format can support more than 127 forks, assuming you don’t care about GEOS/Wheels compatibility, but that still doesn’t help with the initial problem I posed. Specifically, how do you access the middle or end of a file without having to read all the stuff that came before it first? It seems obvious to me that the only answer is what I proposed about adding a REL-type side sector structure. If we do that, then GEOS/Wheels compatibility is sunk ANYWAY, so we don’t need to worry about it. I think what you said about using $FF rather than $02 as a file structure flag sort of makes sense, but if VLIR files are always of type USR and not REL regardless, then it doesn’t really matter either, does it? If the file type is not REL, the structure type flag will never be interpreted as a record size.
Regarding the text-mode GUI, I think it’s the only realistic way to make 80 columns work too, not just 40 columns.
|
|
|
Post by gsteemso on Nov 29, 2014 2:00:18 GMT
Thinking about it, one thing I would definitely upgrade in my “dream ROMset” is the machine language monitor. I am firmly of the opinion that an MLM which can neither assemble nor disassemble code in memory is pretty useless.
|
|
|
Post by gsteemso on Nov 29, 2014 1:56:27 GMT
What part of JD is not working for you? I know there are a couple of problems with the version for the flat C128, where CTRL+D doesn't work and the CTRL key doesn't pause scrolling. I told Jim Brain how to fix those issues, but I'm not sure whether he made those changes or not. The listing-stopper and listing-slowdown functions don’t work, and I think there was something wrong with the SEQ file viewer as well. I keep being unpleasantly surprised by these issues and then forgetting the details as I avoid the whole business for a few months. I never would have expected such issues to make it into production EPROMs. What do you have to change to fix the bugs?
|
|