WCME issue
Since WCME has been updated, all seems well, except that renewal of the certificate for the main site (this one) fails day after day. The reason might be that https: is not yet enabled, or that things have not been set up yet, or not properly. This has a somewhat lower priority, but it needs to be addressed…

By the way: I _think_ I found a reason for losing the connection now and then. There is a line in the WASD mapping that disallows access for clients that have over 10 connections. That is valid, but it should not be applied to my own address, since the server may use that for its own processing: I couldn’t connect to the normal site, got “too many connections” in the WATCH output, and the server returned the (intended) 503 error.
I changed the rule to apply to any address except my own. It seems to affect blog performance as well.

Second, I started working on the procedures and programs that scan the log files and relate the data in terms of time and source. For file transfer and web, the service logs contain the data I need, but for mail I have to add router data to find out who is trying to pass mail that the spam filter will discard: the signalling of incoming mail does not show the originator, so I need both the PMAS and the router logs to find the culprits.
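Matching the two logs amounts to joining events on source address within a time window. A minimal Python sketch of the idea, assuming both logs have already been reduced to (timestamp, source-IP) tuples; the real PMAS and router log formats differ, so the parsing step is omitted here:

```python
from datetime import datetime, timedelta

def correlate(pmas_events, router_events, window=timedelta(minutes=5)):
    """For each discarded-mail event in the PMAS log, count router log
    entries from the same source IP within the given time window."""
    hits = {}
    for p_time, p_ip in pmas_events:
        for r_time, r_ip in router_events:
            if r_ip == p_ip and abs(r_time - p_time) <= window:
                hits[p_ip] = hits.get(p_ip, 0) + 1
    return hits
```

Anything correlated this way points at the originator even when the mail signalling itself does not show it.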


Software port blog
Read on the OpenVMS ITRC: John Apps and Brett Cameron have taken on the task of porting an open-source messaging suite to OpenVMS. It’s named (Open)AMQP, and they will keep you informed about the project on their blog.
It’s an interesting piece of work, given the pricing of BEA MQ (the former Digital Message Queue) and IBM’s WebSphere MQ. It would be a nice addition to the OpenVMS portfolio, but alas, I lack the time to get into it at the moment. If you don’t, the blog also contains links to the documentation and kits.


First page shows
Adding log lines to the OpenReport routine revealed the mistake causing the access violation. The routine should be called with the same context: the address where the initiated report is stored. The next problem turned out to be a wrong name: “PAGETITLE” was not found in the report. No wonder, since I had specified the wrong one. After changing the configuration file to contain the right report, behold: the page shows up!
It’s only that no data is shown, but that seems to be rather normal when the data file is empty. It’s also possible that getting the index record fails.
So the next step is to add data to these files, using a DCL procedure to begin with. Perhaps I’ll write a tiny, quick-and-dirty image that is as straightforward as possible: just a simple input routine and simple storage. Well, the routines do exist and can be used. In that case, I can (and should) use Stephen’s recommendations.

On second thought: that might be the right way to go.

MySQL once more
Whenever a big update arrives after a quiet period (MySQL had been running for two weeks with just a few updates), things go wrong: yesterday’s big one was stored, and MYSQL_SERVER thought it time to crash again. I’m getting rather tired of this problem and will upgrade some time soon; there is not much room at the moment.

Someone is porting SQLite to VMS, something I abandoned a few months ago. Had I looked deeper into the issue, I might have found that there is an os_vms.c file already available. Old, but very usable. It seems the big locking problem has been solved by the latest CRTL library routines: flock seems to be available on OpenVMS 8.3, and working. Jean-Francois Pieronne, Mr MySQL/Python/MoinMoin, did some re-coding of a few routines, and there shouldn’t be any problem now. Integrating this with WordPress is probably not that easy; there will no doubt be a PHP extension, but I don’t know whether it can easily be ported. A Python one exists, and works.
But I can use it in my program. It’s FAST. It’s TINY. Just what I need on my small box.
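For illustration, this is roughly what using such a tiny database from Python looks like with the standard sqlite3 module (the table and data below are made up, not from the actual program):

```python
import sqlite3

# An in-memory database keeps the example self-contained; a file-based
# one would go through the os_vms.c locking layer mentioned above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.execute("INSERT INTO posts (title, body) VALUES (?, ?)",
             ("First post", "Hello from a tiny database"))
conn.commit()
row = conn.execute("SELECT title FROM posts WHERE id = 1").fetchone()
```

No server process, no watchdog needed: the whole engine lives inside the program.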

This is a Python framework for creating, well, articles and newspapers. Blogs, in other words. Google seems to use it as the framework for their applications as well.
Worth a try!

As usual, there is too much to be done in too little time 🙁


Creating a page: reading files
I found some time to get on with the web page program. I remembered that everything compiled, and I kind of recalled that there was something wrong with the file specification in the [[HOME]] section, but that was about it.
Again, WATCH is the ultimate tool.
Accessing the data didn’t work: I got non-conformance messages all the time, and watching the server work, it immediately came out what was wrong: the currently active version was built /DEBUG. The remedy is easy: rebuild the image without debug, and don’t forget to copy the result to CGI_BIN:! (It took some attempts to remember that…)
With that set up, where do the log files go? Although I had set the proper logical and deleted the old files, no new log files were created. Returning memory fragments led me to other places: protection had to be set so the server could indeed WRITE to the proper directory, a requirement for creating files. It had worked before, but when I installed the new web server version, the protections were reset. This one as well, so no write access anymore…

After an hour, I was at the point where I could start finding out whether it works. Of course, it didn’t.

One of the things to be done before actually reading the data files is some conversion of dates: a standard VMS-formatted datetime, like “23-apr-2008 21:25:22.012”, is not usable for a descending key; you need this date converted to a string like “20080423212522012”. Of course, I have a (FORTRAN) library routine for this, but there were some problems in it that needed to be solved, and finding out why the conversion failed took some time. The first was a miscalculation, which was simply solved. The other turned out to be the error test on an internal WRITE (an INTEGER*4 datum into a CHARACTER*11 string): it returned an IOSTAT of 1, which is non-zero, hence an error, so the error branch was taken, even though the converted data was correct. The short cut is to simply ignore the status and use the result regardless of the outcome. For now, that is; a final solution must be found later, but “1” seems to be a valid status in this case, though not zero. (ERRSNS gave some RMS error codes and none seems applicable: there is no RMS involved.)
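The conversion itself is straightforward; here is a Python sketch of what the FORTRAN routine does (the function name is mine, not the library’s):

```python
from datetime import datetime

def vms_date_to_key(stamp):
    """Turn a VMS-style datetime such as '23-apr-2008 21:25:22.012'
    into the sortable string '20080423212522012': year, month, day,
    hour, minute, second, milliseconds."""
    dt = datetime.strptime(stamp, "%d-%b-%Y %H:%M:%S.%f")
    return dt.strftime("%Y%m%d%H%M%S") + "%03d" % (dt.microsecond // 1000)
```

Because the most significant fields come first, an ordinary string sort (or an RMS descending key on this string) orders the records by time.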

Once that was settled, opening the files failed: %RMS-F-DEV, error in device name or inappropriate device type for operation. The reason is quite obvious: using the right logicals is crucial. The program runs in the server’s scripting account, and therefore the logicals should be defined either /GROUP (for that user) or /SYSTEM. The latter is the easy one 😉
Still, the files were not opened, although the right name was specified. The debug log didn’t show why, and examining the code revealed one simple reason: the routine that initiates data access used fixed specifications instead of the data stored in memory. That means an adjustment to the interface, passing the names of the index and data files. With these additional arguments used in the OPEN statements, the files are accessible and can now be read. No errors found here, but it remains to be seen whether data is actually passed.
I was anticipating that the read data could be seen once it was stored in the report and output: these routines are used by the image currently creating the home page. But here I ran into another problem: an access violation. There must be something wrong in the interface. The current program is FORTRAN and the new one is Pascal, so there may be an error in the Pascal definition of the routine.

Now time is up, so solving that will have to wait. Just a short time, I hope.


Procedures available for download
Controlling the system and access is something to be done regularly. Two of these tasks are scanning the web access log for deliberate abuse attempts (weekly) and compiling the mail statistics (monthly).
Checking the web logs means scanning hundreds of lines while the number of actual attempts is rather low, and getting the mail statistics is just a matter of counting records in the logs. Both are very boring activities that the system can do faster and with fewer errors than any human can.
So I wrote DCL procedures to do it: WeblogScan.com for scanning the WASD log files, and pmas.com for counting mail statistics from the PMAS log files. You cannot use them without changing some information, but what to adapt has been noted clearly in the scripts.
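The heart of both procedures is tallying lines that match a pattern. A Python equivalent of that core, for illustration only: the log lines and patterns below are made up, and the real WASD and PMAS formats differ.

```python
import re

def count_matches(lines, pattern):
    """Count log lines matching a pattern -- the kind of tally the
    DCL procedures perform on the WASD and PMAS log files."""
    rx = re.compile(pattern)
    return sum(1 for line in lines if rx.search(line))

# Hypothetical examples: flag likely abuse probes in a web log,
# and count discarded messages in a mail log.
web_log = ["GET /index.html 200",
           "GET /cmd.exe 404",
           "GET /scripts/root.exe 404"]
mail_log = ["accepted <a@b.c>",
            "discarded <spam@x.y>",
            "discarded <junk@z.w>"]
abuse = count_matches(web_log, r"\.exe")
discarded = count_matches(mail_log, r"^discarded")
```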
Another handy procedure, which also requires some locality settings to be adapted, is my MySQL watchdog. It limits MySQL downtime after a crash to a maximum of 15 minutes.
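The watchdog logic itself is trivial; what bounds the downtime is the interval at which it runs. A sketch of one pass, in Python for illustration (the real procedure is DCL, and the names here are mine):

```python
def watchdog_pass(server_running, restart):
    """One pass of the watchdog: restart the server if it is found down.
    Scheduled every 15 minutes (e.g. as a resubmitting batch job), this
    limits downtime after a crash to at most 15 minutes."""
    if not server_running:
        restart()
        return True    # a restart was issued
    return False       # server is up, nothing to do
```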

All files are free, you can use them to make your life a bit easier.

Don’t rely on them blindly. They work for me and I’m satisfied with them, but that doesn’t automatically mean they work for you. No guarantee, no support (well, if you know a way to enhance them, please supply feedback so everyone can benefit from your experience).

You may pass them on but please leave the header intact.


Webpage program getting on
Tonight I took some time to finish the first stage: all modules, though most are simply RETURN statements because they will be coded later, now compile, and the program links. But I cannot continue at the moment for one stupid reason: the last update cycle installed the SYS7 update, and that breaks DEBUG: I cannot run an image built /DEBUG… This was first mentioned on ITRC, and the offending patch has been superseded by SYS8; I already have that, but had no time to install it.
A second thing to be done was an update of the home page program. I don’t want to update texts in the image itself, because that requires the program to be recompiled, linked, tested and copied. Instead, it’s easier to create a text file and have the program read it.
It was rather simple, a very primitive (not yet perfect) scheme: create a file named “dd-mmm-yyyy_A_Title_You_want_to_be_displayed”, fill it with the text you want (including hyperlinks and other HTML stuff), define a logical (/SYSTEM) to refer to this file, and that’s it. The program reads the logical (and fails if it doesn’t exist: testing it is a requirement), extracts date and title, replacing all underscores by spaces, reads all lines into an array, and outputs them using the reporting tools.
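Extracting date and title from such a file name is a one-liner in most languages; a Python sketch of the scheme (the function name is mine):

```python
def parse_page_file(name):
    """Split a name like 'dd-mmm-yyyy_A_Title_You_want_to_be_displayed'
    into the date part and a displayable title, with the underscores
    of the title becoming spaces."""
    date, _, title = name.partition("_")
    return date, title.replace("_", " ")
```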

The biggest trouble was reading a file created by EVE. But I was being too “error aware”; it wasn’t so troublesome after all.

The result is the current page, including the extension “.txt” in the header (I already said it isn’t perfect 🙂), finished after midnight.


Next phase in program
I started the next phase in the program that will more or less replace the front page and, potentially, the PHP code: processing the request. It requires evaluation of the URL, or better, the part after the server name; after that, the proper action should be taken depending on this evaluation. The assumption is that the home page, which can be a blog, by the way, is just the service, like “www.grootersnet.nl”, with or without a trailing slash. “Commands” are passed in the form ?(action), such as displaying an index: “www.grootersnet.nl?index“, or “www.grootersnet.nl?i=-1“, meaning: give me the previous index page. I haven’t decided on the exact syntax yet. It could be just the first letter: i for index, p for page and r for recent, plus whatever other parameters are needed.
Time will tell.
The basic framework is now done; it’s now a matter of parsing the command and doing what’s needed. I think the easiest part now is creating routines to retrieve the requested data from data storage, depending on the data type: create an index page, or get the most recent post, or a given page, by number to start with (I could think of retrieving a page by its date or title; a small portion has already been prepared for that, so it wouldn’t be much trouble). Once these routines are created (and tested), processing of the data is fairly straightforward.
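Since the syntax is still open, here is a Python sketch of the single-letter variant described above; the action names, the accepted long form, and the error handling are assumptions, not the final design:

```python
def parse_command(query):
    """Interpret the part of the URL after '?'.  An empty query means
    the home page; a single letter selects the action (i = index,
    p = page, r = recent), optionally followed by '=' and a numeric
    parameter, e.g. 'i=-1' for the previous index page."""
    if not query:
        return ("home", None)
    name, _, value = query.partition("=")
    actions = {"i": "index", "p": "page", "r": "recent", "index": "index"}
    action = actions.get(name)
    if action is None:
        return ("error", query)
    return (action, int(value) if value else None)
```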


No real issues – only an additional web
All seems well on all systems. There are a few minor tweaks to take care of, but these are of the class “somewhat annoying” and I can live with them: it looks like the spam list isn’t updated (a few messages slipped through the mazes of PMAS and were discarded by SMTP; they show up in the operator log but are missing from the list…)

The progress on the webpage program is more important at the moment.

The big thing, simply required to get on with the program, is running it as the main “page”, without reference to its actual location. No problem to map it, I thought, so I created a new service in WASD, usable from the local network only and located on a separate port, so it wouldn’t interfere with all the other webs. I will run the program directly from the location where the build procedure puts it, but give it another name using a logical:


and map it all in HTTPD$MAP.CONF

map / /tstexe/wg_webpagedev map=once
exec /tstexe/* /tstexe/*
pass /* /tstexe/*

like I have set up my main page (which uses “/cgi-bin/” under the hood).

It didn’t work: the mapping was all right, but once the ‘script’ was to be accessed, /tstexe/ was reported to be a bad name.

Finally, I took a look at the directory the home exe runs under: CGI_BIN. This proved to be a search list, to start with, but, like the blogs, both entries are actually concealed devices!

After I defined TSTEXE as a concealed device (there is no need (yet) for a search list):


it worked. I didn’t get the expected results (actually, I got the nasty “Not a valid CGI-response” error due to program errors not yet handled), but that was solved by creating and modifying the configuration file.

Expanding the code
The code has been expanded to show what’s read from the system (the symbols set up by the web server) and from the configuration, but limited to the basic data. I added some more things to be displayed, and it turned out it wasn’t doing what it should: some data wasn’t accessed in the right place, or not at all. Some additions were made to the configuration, and therefore to the code, to get all data in place.
Once that proved to be right, the next phase was to load the report definitions using the printLibrary routines (made years ago on VAX, and only recently ported to Alpha). Originally, the OpenReport function would do this and prepare output as well. However, that is not feasible in this situation, where output isn’t to be opened at all (yet), so I had to add another routine to the (FORTRAN) code of the print library: InitReport, actually the same as OpenReport but without accessing output. Another advantage is that here I can specify the library, where OpenReport relies on the logical PRINTLIB (this hasn’t changed). This new routine is now called to initialize the pages. So far, it looks good, in both HTML output and plain ASCII.
It also meant I had to review the environment code I created earlier to access these routines. Some required attributes were still missing, and arguments that change needed to be specified as VAR.

Now I can start getting data, but first I should determine WHAT to get if no arguments are specified. That means another addition to the configuration file and structures (and code), but it’s just more of the same and no problem at all.


Bit by bit, the program for dynamic content continues. Small steps at a time since it’s just a few hours on some days, and none on others.
It required slight updates to already-present library routines, though the vast majority just do what they should. I only found a slight mistake in one of the routines interfacing to Mark Daniel’s CGILIB, which caused a problem getting data from the server. This has now been solved, and at this moment the retrieval of the required server symbols works, as well as the basic configuration for both the home and blog definitions.
Next will be the initialisation of all reports. The basics are already in use in the current home page, actually a derivative of the test program for all reporting routines, where the texts are hard-coded. But all of the HTML code is saved as a report in the text library. There is no HTML-generating code in this program!
And that’s what I want it to be like.

Some thoughts are still maturing, not at all fit for implementation, but to give an idea:

  • Specify the order of the blocks to be used in generating the page
  • Write pages to memory and serve from there (a kind of cache). Most feasible for index pages, but it can also be used for normal pages. Perhaps multiple caches: one for indexes and one for pages
  • Make it a threaded application
  • Multiple handlers – which will require a global section where all data is kept: initialisation data, reports, caches…
  • make it a RunTimeEnvironment
  • Not necessarily in that order. Give it time; it may, or may not, appear in a future release.
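The cache idea in the list above could look like this, sketched in Python (the real program is Pascal; the names and structure here are illustrative only):

```python
class PageCache:
    """Keep rendered pages in memory, with separate stores for
    index pages and normal pages, as suggested above."""
    def __init__(self):
        self.indexes = {}
        self.pages = {}

    def get(self, kind, key, render):
        """Return the cached page, rendering it on first request only."""
        store = self.indexes if kind == "index" else self.pages
        if key not in store:
            store[key] = render(key)
        return store[key]
```

A global section shared by multiple handlers would replace the plain dictionaries, but the lookup-or-render logic stays the same.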

    (I noticed I have to adjust the specification of LIST items here, but that can only be done on the server itself; I have no access to the files via the web, for obvious security reasons. In the meantime, I did some <font> specification to make at least the text more readable.)