|
Post by mirkosoft on Feb 10, 2015 8:35:07 GMT
Hi!
Yesterday I got a Hansol CRT monitor with VGA input. I was excited and looking forward to testing my light pen, and later my light gun. I connected the C128's VIC output through a composite-to-HDMI converter and then an HDMI-to-VGA adapter... I tried my VIC light pen demo - nothing worked. Then I connected the VDC output through a CGA-to-VGA adapter and tried the light pen/gun demo from the VICE test disk - the row values hovered around the correct ones but were still wrong, and the column values were even further off. Then I tried my VIC light gun demo; the light pen wouldn't shoot the target, so I got frustrated and didn't test the light gun at all.
Here's my question:
Is it possible to use a light pen or light gun only with the original CRT monitors made for the Commodore 64/128? Or am I making a mistake somewhere? Are there too many signal conversions in the chain?
Thanks for any ideas on how to solve this.
Miro
|
|
|
Post by gsteemso on Feb 10, 2015 18:52:27 GMT
Running the signal through a VGA converter loses the original timing information needed to synchronize the light sensor with the computer’s notion of where the CRT is currently painting. The VGA signal draws the whole screen roughly twice in the time the C128 draws it once, so the C128 is going to get pulses from what it thinks is several places on the screen until your finger comes off the trigger a few screen repaints later.
|
|
|
Post by mirkosoft on Feb 10, 2015 20:02:01 GMT
OK, so VGA is twice as fast, if I understand correctly... The question is: can anything be done about it anyway?
I'll write a calibration program, but it will only be accurate to characters, not pixels... I don't want to lose this fight.
Miro
|
|
|
Post by gsteemso on Feb 12, 2015 3:26:50 GMT
Of course it’s not bloody possible to make it work anyway! The ratio of C128 screen repaints to VGA screen repaints is not only inexact, it also varies enormously depending on the VGA signal parameters (resolution and refresh rate). On top of that, the light pen pulses are coming in at least twice as fast as the 128 expects them to, with no apparent relation between one pulse and the next. I’d honestly be shocked if the VIC-IIe even registered a value for it, because there’s no consistency to it at all.
|
|
|
Post by hydrophilic on Feb 19, 2015 8:47:57 GMT
Yeah, what gsteemso said!
The light pen/gun only works with *precise* timing of a *standard* video display... if you insert "converters" like CVBS/RGBI-to-VGA, then there will be a VERY SERIOUS timing delay... it may look good to a human, but the "microsecond timing" will be VERY wrong for the computer...
So "standard" software will not work if you convert (for example) an "analog" VIC-II image to VGA... (it will also fail with VDC-to-VGA conversion).
Please note... I am not saying it is IMPOSSIBLE... but it is very NON-STANDARD! You must know the *exact* delay of the user's CVBS/RGBI-to-VGA conversion...
Sorry if that is not a great answer... another way to think about it is to test with specific hardware (like RGBI -> VGA with a GBS-8220)... If you are a true ML hacker, you can make that *specific* hardware work... but if a user has different hardware, it will often fail...
I hate to be "Mr. Negative", but it seems like you might need to write many different "software drivers" to adapt to the user's hardware!
If you are a REALLY cool programmer, you will allow the USER to set parameters for their specific hardware... sadly, I don't know how you could possibly make it *AUTOMATIC*... (I think *that* would require a feedback loop which could only be built by serious HARDWARE hackers [impossible for the casual user]).
Sorry if I confused everyone!!!
|
|
|
Post by mirkosoft on Feb 19, 2015 17:45:03 GMT
So, in the end, I see that I'll have to look for an original Commodore monitor...
Miro
|
|
|
Post by hydrophilic on Feb 22, 2015 2:48:46 GMT
Did you do any tests? Like a simple DO/LOOP in BASIC to PRINT the values of PEN? By analyzing the data received (with a known input, like the top-left pixel), a pattern should emerge... if you figure out the pattern, you can make it work! But if you get just a bunch of "random" numbers, then it would be impossible...
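Something like this minimal BASIC 7.0 sketch would do (assuming the 40-column VIC screen and the pen plugged into control port 1 - and, if I remember the System Guide right, PEN(2)/PEN(3) read the 80-column VDC instead):

10 REM QUICK LIGHT PEN POLL - C128 BASIC 7.0, 40-COLUMN SCREEN, PEN IN PORT 1
20 DO
30 PRINT PEN(0),PEN(1): REM X AND Y AS LATCHED BY THE VIC-IIE
40 GET K$
50 LOOP UNTIL K$<>"": REM PRESS ANY KEY TO STOP

Hold the trigger on one fixed spot (like the top-left corner) and watch whether the numbers cluster or jump around.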
|
|
|
Post by gsteemso on Feb 23, 2015 1:01:57 GMT
Your talk of somehow adapting to the signal had me scratching my head. I didn’t think it was physically possible until I stepped through the logic, and even if you did enlist the user’s help to calibrate your software, it would NOT be straightforward. To show exactly why (and what you would have to overcome), let us step through the exact timeline of events. The following is equally applicable to VIC-IIe output (no matter whether PAL, NTSC, or SECAM - they’re all pretty similar if you ignore the colour encoding) and to VDC output, as the main difference between them from a timing perspective is that the VDC crams a faster-changing signal (more pixels) onto each scanline.
(1) The C128 begins to draw the first field of the frame. Standard video signals, which is what the C128 produces on both monitor outputs, always come in two fields, each consisting of every second scanline from the complete frame. Whether the even lines or the odd lines are drawn first depends on the video standard, and doesn’t really matter for our purposes here. (Unless you’ve gone to some effort to set up an interlaced display, both fields are exactly the same anyway.) The C128 begins the frame with vertical synchronization pulses prior to the start of the visible signal, which the VGA converter catches, causing it to begin buffering a new frame. It’s most likely still going to be painting the second-to-last previously buffered frame for a random amount of time, varying from roughly zero to 1/30 of a second (0–2 “jiffies” in Commodore jargon, or “thirds” to more traditional horology).
(2) As the first field is partly painted, let’s call it about 3/4 done, the user (who started pressing the light pen trigger a few frames ago) finally closes the trigger switch enough to connect the sensor in the pen to the trigger input on the video chip. However, the electron beam is being directed by the VGA converter, which at this point is finally drawing the first part of the last previously buffered frame.
(3) The C128 finishes drawing the first field, puts out some more vertical sync pulses, and starts to paint the second field. The VGA converter is now filling in the missing alternate lines in its copy of the current frame, and since that’ll be going on for a while yet, it is probably getting ready to paint the last buffered frame a second time. (Or possibly a third time. Depending on the refresh rate your VGA monitor has negotiated with the converter [generally somewhere between 56 and 85 Hz], some frames will most likely be repainted once more often than others, as the converter waits for the current frame to finish coming in at its TV-derived speed of around 25–30 Hz.) The actual currently-displayed frame could be at any stage of being painted at this point, but going by how I have described this example so far, it’s probably about 60% of the way through the previously-buffered frame image.
(4) The electron beam in your monitor sweeps past the sensor in the light pen, which sends a pulse down the wire and, eventually, into the video chip, which latches the current scanline value into an output register for when your program comes looking. Light pens are not precision instruments, so this pulse is actually several pulses in close proximity, which the video chip has to adjust for. (The whole process takes a small but, to the computer, noticeable amount of time, meaning that even when you have a monitor connected directly to the C128, the value that gets latched isn’t quite the one where the electron beam actually was, so some software calibration is always needed with a light pen, no matter what.) Since the electron beam is not, in fact, controlled by the video chip like it expects, it will register a value near the top of the screen even though in this example the VGA converter is painting nearer the bottom.
(5) The VGA converter starts repainting the screen again, still using the last previously-buffered frame. The C128 has nearly finished filling in the current frame.
(6) The electron beam sweeps past the light pen again, causing another chain of trigger pulses and, eventually, a new raster number to be latched in the video controller. The user hasn’t moved the pen appreciably in the fraction of a second since the last pulse-train came in, but since the actual position of the VGA-converter-controlled electron beam at any given moment can’t be predicted at all from the video chip’s notion of what the current scanline is, your program has no way of knowing that. The scanline number recorded by the video chip is near the bottom of the screen.
**IMPORTANT**: Since the light pen trigger pulse-trains are coming in roughly 0.9–1.7 times as often as the C128 is expecting them (and the pulses are both a bit shorter and twice as numerous as anticipated), there is no guarantee that any specific video chip will be able to keep up. You’d have to try it and see.
At this point, you had better hope your program has been keeping a close eye on a CIA timer through all of this. Assuming the user really is holding the pen still, and you set up an interrupt routine to record (time, raster number) pairs with the least delay possible whenever the light pen trigger goes off, you now know the time between VGA repaints and thus the VGA refresh rate. That in turn will let you calculate the ratio of VGA frames to C128-native ones, which will (provided you’re still keeping a very close eye indeed on the CIA timer) allow you to fudge the recorded raster positions to approximate a correct value. Since this involves a large amount of multiplying and dividing and incredibly precise timing, I think the only way to get acceptable speed out of it is to use a bunch of lookup tables. You would likely have to calculate them from scratch every time you run through the “calibrate light pen” routine, too.
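To make that concrete, here is a rough BASIC 7.0 logger for the data-gathering step. BASIC polling is of course nowhere near fast enough for the real thing (that wants the ML interrupt routine and CIA timer described above); this sketch only timestamps to jiffy resolution, plus the VIC’s current raster line at the moment of the poll, but it is enough to show whether the latched values follow any pattern at all. Addresses assume the 40-column VIC-IIe screen:

10 REM LOG (JIFFY CLOCK, RASTER AT POLL, LATCHED PEN Y) EACH TIME THE LATCH CHANGES
20 DIM T(49),R(49),Y(49): N=0: P=-1
30 TI$="000000": REM RESET THE JIFFY CLOCK
40 DO
50 Y=PEN(1): REM LATCHED LIGHT PEN Y FROM THE VIC-IIE
60 IF Y<>P THEN T(N)=TI: R(N)=PEEK(53266): Y(N)=Y: P=Y: N=N+1
70 LOOP UNTIL N=50
80 FOR I=0 TO 49: PRINT T(I);R(I);Y(I): NEXT

Hold the pen still on one spot with the trigger held down while it runs; 53266 ($D012) is the VIC raster register.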
|
|
|
Post by hydrophilic on Feb 23, 2015 7:42:59 GMT
Yes, I know there are MANY technical limitations... mainly the different horizontal frequency (but also the buffering). So *in principle* I think it is unlikely ("impossible") to work... but I am also a pragmatic experimentalist ("scientist"). If you do experiments with particular hardware, a pattern *might* emerge... if so, you could write software for that PARTICULAR hardware configuration. But like I said before, if you get "random junk" from your experiment, then it would not be possible... and even *IF* you write software that works, it would be limited to that specific hardware configuration... Maybe if the user could configure values it could be more general (more hardware combinations might work), but it would rely on user calibration (if it works at all!). In the general case (any hardware) it does not seem possible (so I think we agree on that). In summary: unlikely to work... but you won't know unless you try!
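For example (just a sketch, with the same assumptions as the poll loop above: 40-column screen, pen in port 1), a crude histogram of the latched Y values will tell "pattern" from "junk" at a glance... if the counts pile up on a few raster lines, there is something to exploit; if they smear evenly over the whole range, give up:

10 REM CRUDE PATTERN TEST: HISTOGRAM OF 500 LATCHED LIGHT PEN Y READINGS
20 DIM H(255): REM LPY LATCH IS 8-BIT, SO VALUES 0-255 (ASSUMED)
30 FOR I=1 TO 500
40 Y=PEN(1): H(Y)=H(Y)+1: REM HOLD THE PEN STILL ON ONE SPOT
50 NEXT
60 FOR Y=0 TO 255
70 IF H(Y)>0 THEN PRINT Y;H(Y)
80 NEXT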
|
|
|
Post by Pyrofer on Jul 4, 2015 20:49:20 GMT
I discussed this with others recently, and we decided that you could FAKE a light pen/gun signal if you knew where you wanted it to appear to be. A modern microcontroller could watch just the H/V (or composite) sync signal and use interrupt timing to fire the light pen pulse at a delay corresponding to wherever on the screen you wanted it to pretend to be. The hard part is deciding where that should be. You could use the Wii method - an IR camera pointed at IR LEDs around the screen - translate that into a position, and send it as a light pen signal; or you could use a touch screen and report the touch position, which would work best for light pen applications. It all requires a microcontroller and some code, however.
|
|