MariaDB active
Tonight I started MariaDB and kept it ‘active’ for at least 15 minutes – longer than the previous version could stay alive. Nothing happened, so I did some SQL work that used to make the server die. But it stayed on.
So I changed access for the Trips, Tracks and Travels blog to use MariaDB – the blog was unchanged after the move. I had to do some updates, but they all seemed OK. (Changed the Admin password, for a change :)).

For now, it looks fine; time will tell, but if everything goes as it should, this blog will be the next – which will require moving the whole MySQL database again. But now that I know the process works, it should not be a problem – except that the blogs will be unavailable for an hour or so.

Stay tuned….


Apart from what happened last month, there is not much to look at. PMAS does its work:
PMAS statistics for February
Total messages    :   2534 = 100.0 o/o
DNS Blacklisted   :      0 =    .0 o/o (Files:  0)
Relay attempts    :     73 =   2.8 o/o (Files: 28)
Accepted by PMAS  :   2461 =  97.1 o/o (Files: 28)
  Handled by explicit rule
         Rejected :   1682 =  68.3 o/o (processed),  66.3 o/o (all)
         Accepted :    200 =   8.1 o/o (processed),   7.8 o/o (all)
  Handled by content
        Discarded :    278 =  11.2 o/o (processed),  10.9 o/o (all)
     Quarantained :    264 =  10.7 o/o (processed),  10.4 o/o (all)
        Delivered :     37 =   1.5 o/o (processed),   1.4 o/o (all)

Only the number of rejected messages is larger than normal, and I’ve seen them coming in during the last week only, with 26-Feb-2017 the busiest day: operator.log was over 1200 blocks that day; the other days that week it was over 400, except for one, where less than 100 would be normal. The number of relay attempts was small; only on 08-Feb did the attempts produce a log over 4 blocks in size, mainly from one source trying to relay. It could have been a test, sending from addresses such as “antispam@nmap.scanme.org”, “antispam@diana.intra.grootersnet.nl” or “antispam@[]” to a number of addresses all related to the same organization. I checked this address and it is listed in a number of blacklists, so it may well be an attempt to see whether there is a way to abuse the mail server. But the server checks the IP address of the sending host: if it is not my own address and the recipient is outside my domain, the relay attempt will fail.
As simple as that.
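The relay decision described above can be sketched as a simple predicate. This is a minimal illustration of the rule, not PMAS code; the function name, the addresses and the domain string are all made up for the example:

```c
#include <string.h>

/* Illustrative sketch only -- not PMAS code.  A relay attempt is
   accepted when the sending host is my own address, or when the
   recipient belongs to the local domain; anything else fails. */
static int ends_with(const char *s, const char *suffix)
{
    size_t ls = strlen(s), lx = strlen(suffix);
    return ls >= lx && strcmp(s + ls - lx, suffix) == 0;
}

int relay_allowed(const char *sender_ip, const char *recipient,
                  const char *local_ip, const char *local_domain)
{
    if (strcmp(sender_ip, local_ip) == 0)
        return 1;                               /* my own address   */
    return ends_with(recipient, local_domain);  /* local recipient? */
}
```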

I had some trouble installing OpenVMS on the Itanium servers – one at least. It would find the DVD and start reading it, but then halt. At first I thought it could be a bad DVD, so I tried to read it on my old PWS, and I could access the files (though not run the executables, because these are I64 images, not AXP…). The DVD wasn’t as bad as I thought it was….
So the first option was to make it accessible over the network, meaning I would have to start Infoserver on the PWS (which should be possible, since that runs VMS 8.4). I finally found the documentation on the Internet to set it up; the docs were a bit limited, but with the examples I think I had it all right. In the end, however, I couldn’t start Infoserver on that box – I still have to find out what caused it to fail.
Second, I found out that I needed to enable the iLO unit first – meaning I had to connect it to the network (it will be set up using DHCP to begin with), see what address was assigned to it (with dhcpdbdump; the MAC address is recognizable as the Itanium server’s) and access MP on the iLO using telnet (it’s on the local network, so why bother about security?). Now I could boot from the internal DVD, and the installation did start and finish. The only things left to do are installing the licenses and doing the TCPIP configuration (and giving the iLO its own fixed address (and name)). BTW: the first Itanium is now named Iris.

No news on the PHP front
There are a few things to consider.
First, in the WASD global configuration I have a line:



causing PHPWASD to run whenever the extension .PHP is found. I’m pretty sure it runs the executable the logical PHPWASD refers to, because otherwise there would have been a version conflict with PHPSHR.EXE: each version I have on the system resides in its own directory, referred to by the logical PHP_ROOT:. This is the rule that is used when I run PHP_INFO.PHP – there is no other reference to this script elsewhere – and it shows all the correct data, with the correct version at that time.

However, in the mapping, the blogs have their own reference:

redirect /sysblog /sysblog/index.php
map /sysblog**/ /sysblog*/index.php
exec+ /sysblog/**.php* (phpwasd:)/sysblog/*.php* ods=5
pass /sysblog/000000/* /sysblog/* ods=5 search=none dir=noaccess
pass /sysblog/* /sysblog/* ods=5 search=none dir=noaccess

which runs PHPWASD (explicitly via the logical) by its own rule.
Now what happens if BOTH start running? It might mean that the one I expect to run (the one in the mapping) runs into an I/O error, or cannot access a source file; or it may find its access to the database blocked, causing a timeout in database access (as is shown in the PHP error log file).

Another possibility that has crossed my mind: in the past I noted that PHP 5.3 and up runs a larger number of sub-processes. It might be that I have run out of process slots, or that parsing, translating and executing PHP code takes far more system resources, causing the script not to finish in time (that has been noted too, so I doubled the maximum execution time) – but it might be that the time to finish database transactions has to be increased as well. Another (and perhaps better) solution is to increase the working set of the PHPWASD processes: it hasn’t been changed since I started running PHP on a 256Mb system…. And I’ve seen that the peak working set could be increased, given the number of page faults…

So these are paths to consider. I haven’t received an answer from Mark Berryman yet.


WordPress 3.5 test – a final update
In the last two weeks, I’ve been busy testing the new version of WordPress (at least, version 3.5) to find out how to set things up in the WASD configuration, and it’s pretty close to being finalized. Two issues have been found – apart from problems within WordPress itself. The tests revealed an error in the PHP port that could have caused the problem in cleaning up the PHP environment after status has been returned to the server (a timer that isn’t cleared when it should be), but that is easily circumvented by setting the PHP.INI parameter max_execution_time to zero – meaning “indefinitely”. In this case, it’s not a problem, since WordPress won’t run into that situation :).
The other problem is to be solved by HP.
PHPWASD calls the C function chdir to change the ‘current directory’ for the running image, and next calls the C function setenv to change the environment variable PATH to the same value. chdir succeeds – but setenv fails when the new value is longer than the last one issued, or than the default value present when it wasn’t set before (being the current default). In such a case, PATH is set to NULL – the ‘advantage’ being that from that moment on, any value is accepted, no matter its size compared to the previously specified value.
There is a simple workaround: just re-specify the new value one more time. If all is well, it will then be accepted. The real solution is for HP to fix the issue; I created a reproducer and sent it to OpenVMS engineering.
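In C, the workaround amounts to no more than re-issuing the failed call. A minimal sketch (the wrapper name is mine; PHPWASD would do this internally around its own setenv call):

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of the workaround: on the affected CRTL, setenv() fails --
   and nulls the variable -- when the new value is longer than the
   previous one.  Since any length is accepted once the variable has
   been nulled, simply re-issuing the same call then succeeds.  On an
   unaffected C library the first call succeeds right away. */
int set_path_retry(const char *value)
{
    if (setenv("PATH", value, 1) == 0)
        return 0;                        /* normal case             */
    return setenv("PATH", value, 1);     /* re-specify once, as per */
}                                        /* the workaround above    */
```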
(The full communication on the WordPress issue can be found on the WASD mailing list for 2013, thread on “WordPress: Config? UPDATE”. When applicable, some data may be downloadable from this site – links are included in the messages. The message to HP – including the reproducer, can be found on the same list – search for “setenv issue”).
Updates to be applied
This means I can now start thinking of an update of the blogs. Besides this one, I’m also thinking of updating Diana to VMS 8.4. The problem with PHP 5.3 on this system may have been caused by a version difference in CRTL – or even VMS: if PHP 5.3 is linked on 8.4 and run as-is on 8.3, it may have led to the end-of-file issues I encountered before.


Update on WordPress upgrade
Finally, the new version of WordPress (that is: 3.5) runs on Daphne, under WASD 10.2 and PHP 5.3.14, in a basic theme – but it required a very different mapping than I used originally:

redirect /wptest /wptest/index.php
map /wptest**/ /wptest*/index.php
exec+ /wptest/**.php* (cgi-bin:[000000]phpwasd.exe)/wptest/*.php* ods=5
pass /wptest/* /wptest/* ods=5 search=none dir=noaccess

There might be some additions, thanks to UMA, but this setting does the (most basic) job.
Nevertheless, there are still a few things to take care of – but since I don’t control the PHP-port, PHPWASD or C-RTL, the few things that still don’t work as expected, or wanted, cannot be resolved immediately.
Mark Berryman identified an error in PHP, but there is a workaround: setting max_execution_time in PHP.INI to zero will probably solve some timer-related issues – such as the sort-of keep-alive of PHPWASD.EXE and the PHP environment. He will solve this issue, so some more testing is required.
Not that this workaround solved all the issues – when the WordPress admin page is started, it may take a while before it shows up, and the number of open channels keeps going up, to 150 or so. But later on, it’s MUCH faster, and requires fewer channels, even when PHPWASD is restarted from scratch. This has to do with paging, I guess, because Daphne is small compared to the systems that Mark Berryman and Mark Daniel appear to be using.
More problematic is a dysfunctional C function.
As Mark Daniel has explained, PHPWASD.EXE uses the chdir function to change the location where the images should run – in the Unix way of thinking – in practice, the directory specified in the URL that starts the PHP code. So http://<site>/wptest/index.php will cause a chdir to /wptest, and http://<site>/wptest/wp-admin/index.php will chdir to /wptest/wp-admin. That works fine. But next, PHPWASD.EXE calls the function setenv to set the environment variable PATH to the same value. In itself, that’s fine too – but it fails when the second assignment uses a string longer than the previous one. This happens within WordPress: on the first URL (http://<site>/wptest), nothing is wrong – chdir("/wptest") is executed as well as setenv("PATH","/wptest") – but after the login sequence, when http://<site>/wptest/wp-admin/index.php is to be executed (which is reflected in the address bar of the browser), chdir("/wptest/wp-admin") succeeds, but setenv("PATH","/wptest/wp-admin") fails, and PHPWASD exits after returning a 502 status to the server, causing a non-CGI-conformant response….
When you next start the same URL, all is well, and the pages show up nicely. However, you may encounter similar problems on executing other PHP code that re-executes setenv.

After Mark Daniel had supplied a test program for chdir, getenv and setenv, I adapted it to do some testing. And my assumption that size matters became clear: the length of the last assignment determines the maximum size of the next string! If the new string is longer, setenv returns -1 – stating an error – and the environment variable is set to null:

$ run test3
1 - Longest (/wptest/wp-includes)
getenv PATH == sys$sysroot:[sysmgr]
setenv PATH == 0
getenv PATH == /wptest/wp-includes
chdir PATH == 0
getenv PATH == /wptest/wp-includes
2 - Midsize (/wptest/wp-admin)
setenv PATH == 0
getenv PATH == /wptest/wp-admin
chdir PATH == 0
getenv PATH == /wptest/wp-admin
3 - shortest (/wptest)
setenv PATH == 0
getenv PATH == /wptest
chdir PATH == 0
getenv PATH == /wptest
4 - Midsize again (/wptest/wp-admin)
setenv PATH == -1
getenv PATH == (null)
chdir PATH == -1
getenv PATH == (null)
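The sequence above can be reconstructed in a few lines of C. This is an assumption of the test program’s shape, not a copy of Mark Daniel’s test3 (which is on the WASD mailing list); the chdir calls are left out because the /wptest directories don’t exist outside the test setup. On the affected VMS CRTL the fourth call returned -1; a current Unix-style C library accepts all four:

```c
#include <stdio.h>
#include <stdlib.h>

/* Reconstructed sketch of the PATH-length test: set PATH to a long,
   a mid-size and a short value, then to the mid-size value again --
   the step that failed (returned -1 and nulled PATH) on the buggy
   CRTL.  Returns the status of that last setenv call. */
int setenv_sequence(void)
{
    const char *steps[] = {
        "/wptest/wp-includes",   /* 1 - longest       */
        "/wptest/wp-admin",      /* 2 - midsize       */
        "/wptest",               /* 3 - shortest      */
        "/wptest/wp-admin"       /* 4 - midsize again */
    };
    int i, last = 0;
    for (i = 0; i < 4; i++) {
        last = setenv("PATH", steps[i], 1);
        printf("setenv PATH == %d\ngetenv PATH == %s\n",
               last, getenv("PATH") ? getenv("PATH") : "(null)");
    }
    return last;
}
```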

I would call this a bug. It might have been introduced in an update on VMS 8.3 and 8.4; Jeremy Begg has found that this function was updated to prevent a memory leak. Quite possibly the way that was solved causes this erroneous behavior.
Alas, there is no way to get around this, other than PHPWASD NOT changing PATH this way. So I would need the PHPWASD code for PHP 5.3.14 (the version for PHP 5.2 has another interface to PHP, which is not compatible…).

But for the normal user – who reads the blogs – there seems to be no problem. So the next step is to try the newest release of WordPress (3.5.1) and the themes I would like to use, and see if it all works fine.

Finally, there may be another thing to consider, and this may have caused the end-of-file problems I encountered on Diana with PHP 5.3: if the code is built on VMS 8.4, it may cause problems when executed on VMS 8.3. It shouldn’t, but because CRTL is involved, it could.

All information on the issue can be found on the WASD mailing list – look for “wordpress: CONFIG?” in the 2012 and 2013 directories – and the test programs I used are available here. Note that this is a VMS-created ZIP file, so you’ll have to move the file to VMS (binary!) and unzip it there…. (The other files in this directory are Windows-based ZIPs containing Notepad files.)


Locating the right layout
Now that I have VWCMS running, it’s time to find the right format for my homepage. Of course, I could fall back on the current one, but it would be nice if the site got a new look and feel 🙂
So I searched for free website templates and came across Wix.com. They offer a nice tool to create your site – but it has to run on their servers, and to link your domain to the site, it’s no longer free….
A second source is freewebsitetemplates.com, which offers a number of ‘free’ templates to be created on… wix.com.
Nevertheless, they also offer downloadable, CSS-based templates that I can install to have a look. Since such an installation is rather standard, it is possible to automate the basics, and so I can now just run a procedure that does exactly that:

Create a directory tree under VWCMS_ROOT – if needed. Some packages have the directory spec in the ZIP file, others don’t; this shows from the location of index.html in the ZIP file. If there is no directory in front, the tree must be created and mentioned as the target directory; otherwise, UNZIP will create the tree itself, so the target is VWCMS:[000000].
Next, copy two files from VWCMS_ROOT:[STARTER] to that directory.
Now the site must be added to the configuration:
Add new lines to VWCMS_CONFIG: and add them to WASD_CONFIG_MAP.

It meant a small change in the WASD mapping configuration:


pass /tinymce/* /tinymce/*

[includefile] wasd_config:vwcms_test.conf

# Public web

and this file wasd_config:vwcms_test.conf is extended by the procedure – as is the VWCMS configuration file – by opening it for append and writing the appropriate lines.

One other thing is needed: INDEX.HTML must be renamed to _SITE.HTML, and the line
<link href="_vwcms.css" rel="stylesheet" type="text/css" />
must be added directly after
<link href="style.css" rel="stylesheet" type="text/css" />

I did this by copying each line of INDEX.HTML to _SITE.HTML, adding the extra line whenever the line read contains “HREF=STYLE.CSS” (I uppercased the content for the comparison).
In most cases this worked, but in some, the tag spans multiple physical lines, so a small manual edit of _SITE.HTML is needed – but that shows up immediately.
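The copy step can be sketched in C as a plain filter. The procedure itself is DCL, so this is only an illustration of the idea, not the actual code; the uppercase comparison mirrors what the procedure does:

```c
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Illustration of the INDEX.HTML -> _SITE.HTML copy: write each line
   through unchanged, and emit the extra _vwcms.css link directly
   after any line that (compared in uppercase) contains the style.css
   reference.  A <link> tag wrapped over several physical lines will
   escape this test, which is exactly the manual-edit case noted. */
void copy_with_css_link(FILE *in, FILE *out)
{
    char line[1024], up[1024];
    size_t i;
    while (fgets(line, sizeof line, in)) {
        fputs(line, out);
        for (i = 0; line[i] && i < sizeof up - 1; i++)
            up[i] = (char)toupper((unsigned char)line[i]);
        up[i] = '\0';
        if (strstr(up, "HREF=\"STYLE.CSS\""))
            fputs("<link href=\"_vwcms.css\" rel=\"stylesheet\""
                  " type=\"text/css\" />\n", out);
    }
}
```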

Last but not least: change the protection of all files to W:RE – and indeed, in most cases the templates can now be visited.

This makes it easy to adapt the template to fit my needs.


IP forensics
The firewall in the router on the edge of the network has the ability to block 192 objects: single addresses, address ranges, or complete networks. I use this to block a number of single addresses and one or two ranges, based on the attempts to break into the FTP or web server. Since these attempts fail because of the use of wrong username or password, these are logged in TCPIP$FTP_RUN.LOG, the WASD access logs or in the accounting file. All mention the IP address from where the access originates, so it is easy to block them: Add the address as an object, add it to a group and you’re done.
Good side-effects are that these addresses can no longer abuse the web or mail server either: it limits the amount of spam messages, and the break-in attempts on the webserver are less frequent. However, non-evil access is blocked as well – but I don’t care about that: who intentionally and repeatedly misbehaves should not be surprised to be blocked for non-evil access too.
These additions have to be done by hand, it would be nice to automate that, preferably on a daily basis :).
So I’m now working on a set of procedures to extract data from a number of files.
The primary log is the log file of the firewall: ALL access is logged in that file, and because the port is shown, it is possible to tell what type of access it is: mail, FTP or web? But it doesn’t show the intention: is the access acceptable, or abusive? A large number of consecutive accesses from one address could be a hint of the latter, especially where it concerns file transfer or mail, but for web access it can be perfectly valid. A real determination can only be made by examining the log files of the application that handles the protocol: the PMAS log files for incoming mail, the access logs for the web, or the FTP log files. Other resources are the operator log and the accounting file.
Once the output files exist, they can be concatenated and sorted, so I can correlate the data in the protocol-based output against the router log – which is a challenge by itself. Hopefully I have all the data at hand. Luckily, all protocols are handled on the same (VMS) system, and the router gets its time from there as well. It’s good to have one timebase!
The only possible drawback is that access from blocked addresses or networks is not logged by the router, so once a site has been blocked for a period, it will show up as ‘non-evil’ the next time, since no access has been noted. At least, I didn’t see such a message in the router logfile.

Today I finished the procedures that convert the log files of the router, the spam filter and the web service. They all produce the same record layout. But there is one difference: where PMAS stores the full date and time – including hundredths of a second – the router and web access logs show time in seconds, and the router doesn’t even show the year. The latter has been handled – quite primitively, but it works. However, an ASCII sort will not produce the right order, so I will have to think of a way around that issue.
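One way around the ordering problem is to rewrite every timestamp into a fixed numeric key, so a plain ASCII sort orders the records correctly. A minimal C sketch, assuming the classic syslog “Mmm dd hh:mm:ss” stamp (no year, no fraction – the real router format may differ); the year is supplied by the caller and the hundredths default to 00:

```c
#include <stdio.h>
#include <string.h>

/* Sketch: convert a year-less "Mmm dd hh:mm:ss" stamp into a
   "YYYYMMDDHHMMSSCC" key that sorts correctly as plain ASCII.
   Returns 0 on success, -1 when the stamp doesn't parse. */
int syslog_to_key(const char *stamp, int year, char *key, size_t keylen)
{
    static const char *months = "JanFebMarAprMayJunJulAugSepOctNovDec";
    char mon[4];
    int day, h, m, s;
    const char *p;

    if (sscanf(stamp, "%3s %d %d:%d:%d", mon, &day, &h, &m, &s) != 5)
        return -1;
    if ((p = strstr(months, mon)) == NULL)
        return -1;
    snprintf(key, keylen, "%04d%02d%02d%02d%02d%02d00",
             year, (int)(p - months) / 3 + 1, day, h, m, s);
    return 0;
}
```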

The next stop is examining accounting.dat for LOGFAIL records (giving me output for FTP access, for instance), and the operator.log file for mail that passed the spam filter; the basics are there already, I just need to reformat the output so that it fits the other files.
I can also scan the FTP server log file – but that is opened for write, so I’m not certain I can access it immediately. To be kept in mind…

Once that is done, the data can be loaded into a database of some sort, and it will be simple to retrieve the top-100 abusers and their IP addresses: these are to be loaded as objects to be blocked, and that’s it!
Or use them otherwise 🙂
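The ‘top-100’ extraction itself needs no database to prototype. A minimal C sketch, under the assumption that the source addresses have already been pulled out of the merged logs (names and sizes are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch: count how often each source address occurs
   and sort the counts descending -- the front of the result is the
   candidate list for the router's block objects.  A real run would
   read the merged log output; here the addresses are in memory. */
struct hit { char addr[16]; int count; };   /* fits "255.255.255.255" */

static int by_count_desc(const void *a, const void *b)
{
    return ((const struct hit *)b)->count - ((const struct hit *)a)->count;
}

int top_abusers(const char **addrs, int n, struct hit *out, int max)
{
    int used = 0, i, j;
    for (i = 0; i < n; i++) {
        for (j = 0; j < used; j++)
            if (strcmp(out[j].addr, addrs[i]) == 0) { out[j].count++; break; }
        if (j == used && used < max) {          /* first sighting */
            strncpy(out[used].addr, addrs[i], sizeof out[used].addr - 1);
            out[used].addr[sizeof out[used].addr - 1] = '\0';
            out[used++].count = 1;
        }
    }
    qsort(out, used, sizeof *out, by_count_desc);
    return used;                                /* distinct addresses */
}
```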


Work at hand
Apart from the PHP issues, there are a few other things under construction: A new homepage, and a suite to process network-related logfiles.
For the new homepage I plan to use Mark Daniel’s VmsWasdContentManagementSystem – a native VMS executable that can handle this type of posts; even blogging is an option (perhaps any blog on this site may be redesigned using this package). I had the beta installed, so I removed it to prevent problems that could arise, downloaded the latest version (both the sources and the AXP objects), and built and installed it. It does require some configuration and mapping in WASD, and to get familiar with it (and because of the recommendation) I set up the example as in the documentation. But either I don’t understand or misinterpret the docs, or these are inconclusive (incomplete or plain wrong – I cannot tell): I ended up with a message:
ERROR 403 -  reported by VWcms
Site directory not configured!

To be investigated….
Network logging
It’s an idea I’ve had for quite some time: scan all incoming network access, find out who’s attempting to hack or abuse the systems, and shut the door on these people.
I started today with a program to scan the SYSLOGD logfiles on Diana: the firewall on the edge of the domain logs all access in this file, and when it is over 25.000 blocks in size, it’s cycled; all cycled files are stored in a zip file during the monthly maintenance process. Other files to process are the PMAS and FTP logfiles, and the access logs of the webserver.
So I need a program to convert these files into data that can be stored and analyzed, and that is also capable of updating the firewall with the top-100 addresses; the Vigor is capable of storing 192 single addresses, address ranges or networks that can be denied access – at the gate.
I started with a DCL procedure that splits the SYSLOGD output – either active or archived – into incoming and outgoing traffic, each of which is next split into protocol-specific files. So at that moment, I have all lines of logging for every protocol, either incoming or outgoing, in exactly the same fixed format. Therefore, it’s very easy to extract the required data from these files: date and time of access, the source and destination address and port – and the protocol.
Since there is quite a number of archives to process, I also created a procedure that scans a directory for these files – put there by hand or by unzipping an archive – and has each file processed that way. I decided to mark each final output file with the date it is created; once created (if not already existing), it will be extended with each SYSLOGD file that is processed.
This works fine now – next is the extraction of the same data from the PMAS logfiles, but IIRC, that has been done already, I just have to look for them; otherwise, it is not a lot of work to do the same for these files. The same applies to the web-server access logfiles: create a procedure that can handle one, and I’m done (just add a wrapper that passes the name of the file to be processed).
And, of course, a program to store this data in a database, a program to analyze the data, and one to update the firewall accordingly.
A few days ago, I found out – by accident – that the PMAS license expires tomorrow. I sent a request for a new license to the address I know exists for that type of message – but it bounced. Next, I sent it to the address of Hunter Goatley – who’s in charge of the hobbyist licenses – and that bounced as well. So I sent it to the support desk of Process Software, but since I have a free license, they couldn’t help me; instead they passed me another address – which also bounced – so I was advised to contact Hunter directly, and that one hadn’t bounced within the next hour. So it is likely to arrive; hopefully Hunter is not on holiday, and the license arrives in time – or I’ll be buried under all the messages that PMAS is now blocking or rejecting… Fingers crossed….


More on PHP update
Downloaded the latest PHP (5.3.14), in which all extensions are now included, but it has its own obstacles: WordPress 2.6.3 can no longer connect to the database. I ran into something similar when I tried to bypass the mysql extension in favour of the more modern mysqli one, where WP signaled it missed an extension… No message of that kind in this case, but a database connection cannot be established…
Not such a big deal – WP 3.4.1 is on the shelves to be rolled in – when PHP 5.3 works.
So I continued testing.
Not that it mattered much: I got the very same issue – missing end-tags at EOF – on the very first file that is ‘required’ by the code in index.php:
* Front to the WordPress application. This file doesn't do anything, but loads
* wp-blog-header.php which does and tells WordPress to load the theme.
* @package WordPress

* Tells WordPress to load the WordPress theme and output it.
* @var bool
define('WP_USE_THEMES', true);

/** Loads the WordPress Environment and Template */

wp-blog-header.php is indeed the file that PHP 5.3 complains about:

* Loads the WordPress environment and template.
* @package WordPress

if ( !isset($wp_did_header) ) {

$wp_did_header = true;

require_once( dirname(__FILE__) . '/wp-load.php' );


require_once( ABSPATH . WPINC . '/template-loader.php' );


Indeed: no end-tag here.

Since all files originate from Unix, the WP files are all in Stream-LF format. It may cause an issue with PHP 5.4 (where, I think, it should not), but since I noted other problems with this function elsewhere (wget…), Mark suggested converting the file to variable format, since that would bypass the mmap function.
So I did, for this file. It could mean the next file (wp-load.php) would cause the problem with a missing end-tag. But alas: the converted file no longer caused the error, but neither did wp-load.php – the contents of wp-blog-header.php were not interpreted, just presented as normal output….
Not exactly what is to be expected.

We’re getting on…. Mark will set up an Alpha 8.3 box and try for himself. (I guess he’s working on 8.4 on Alpha, or on an Itanium box?) I’ll try on my 8.4 box as well. Perhaps it is an 8.3 issue….


New hardware
My (about five-year-old, Pentium-4 HT) workstation dubbed Aphrodite will get another role: it will replace the machine in the living room. That one is a Pentium-4 as well – without HT, and quite problematic at times.
I obtained a state-of-the-art new box: an ASUS P9X79 motherboard with an Intel i7 processor @ 3.6GHz, and 8 Gb of memory (expandable to 64Gb), plus 8-channel audio, as on my previous system. I transferred the video card and disks, but not the DVD drives: these are ATAPI, and the new system has (e)SATA only, so I had to obtain one of those as well.
Suitable for the heavy stuff I intend to run on the beast: running multiple Alpha emulators side by side, and processing sound, images and, perhaps, video. I could use Linux on the box, but the emulators I can use either do not run on Linux, or not in the way that I intend to use them. For sound and image processing, I already have Windows-based software I can work with pretty well, and I would need to learn the Linux equivalents. In the future I may add Linux as an alternate OS, but for the moment I stick to Win7pro-64.
Some trouble: the front USB bus has a different connector that doesn’t fit anywhere on the motherboard – and yes, I do need it; there is no COM port, so one needs to be added; and the motherboard seems to have a broken DIMM slot, so 4 Gb of memory isn’t fitted in its preferred position.
These I’ll have to address with the supplier.
But I had to install the OS from scratch, since Win7Pro-32 didn’t boot on this box. And installing the OS didn’t work out as well as I expected: I had to clear the whole disk – including the partitions containing data – because the BIOS of the new box couldn’t handle them: it is EFI-based….
It was no problem to move them to the other disk – I thought – using a DOS box and XCOPY to copy the contents to a directory on the other disk. But once that was done, I couldn’t find the directories I created. No big deal. A pity – but I do have a backup, and there haven’t been many changes since then anyway. Or would it be a disk I didn’t expect?
After shutdown and moving the machine to its final location, it turned out that a boot after shutdown almost always failed, and I had to do a repair from the installation disk – which invariably failed because “… the system to be repaired is incompatible…”. But when offered to reboot normally, there seemed to be nothing wrong. Might it have been caused by installing in safe mode?
So in the end, I re-initialized the disk, and installed everything again from scratch – but now booting from the DVD drive. From that moment on, it all went smoothly. Getting drivers and software from the Internet – no problem.
But in accessing any of my own sites, there was a problem. None responded, although services and servers were up and running….
New IP address
First thing to be done is pinging the server by name:
ping www.grootersnet.nl
translated the server to be – what I would expect, because that’s what’s in DNS: the outside address of the router.
But when accessing the router to see what it said, the WAN connection was now on a different network, as were the DNS resolvers of the router. I contacted the ISP site (luckily there was no problem getting out!) and found that some work had been done that morning, and the connection had been down for a few minutes. So I called the help desk, and it was confirmed that the address had indeed changed – but there had been no information on this, which I would expect to be sent IN ADVANCE. Anyway, I had to contact the registrar of my domain to have the DNS references updated. That requires a signed document, which could be sent as an attachment in an email message.
Which I couldn’t use over the Internet….
But there are other addresses I could use: my provider’s, gmail, yahoo, hotmail…. So I created the letter, signed it, scanned it (using the new box – even with its problems) into a PDF file and mailed it. After a phone call, it was handled within minutes, but it took some time before it would propagate over the Internet.
This morning, however, it still didn’t work: although the new DNS servers got the changes (the router configuration shows their addresses), the DNS servers inside the LAN didn’t. So I restarted BIND on the VMS box, and the router, but somehow it didn’t help. Looking into the LAN configuration of the router, I found that the DNS server in the LAN was the VMS box… Removed that, and now it’s all working again. But from elsewhere – mail, in particular – it may take some time: I hadn’t mentioned that, so it still refers to the old address. Will be changed today as well.
There has been one advantage: No spam either 🙂
Clean-up of DHCP and local DNS
Over the years, systems have come and gone, and any new node in the LAN gets an address from the DHCP server; as long as the MAC address doesn’t change, the system will keep the address it was once supplied. That adds these systems to the local DNS – and they’ll stay there.
But systems come and go, and the obsolete references are not deleted. So I took the opportunity to remove all the old entries from both the DHCP and BIND databases.


wget issue
Doing some investigation for our sales department – doing something useful while being ‘in between jobs’ – I use Steven Schweda’s port of wget. This is a great tool to get the desired data – once you find (or create) a page you can use as a starting point, and find your way around the possible options.

But accessing one particular site, the program crashes (with local and remote information obscured):

$ wget --output-file="/drive/dirpath/name.log" -
--recursive --level=3 --wait=1 --random-wait -
--directory-prefix="/drive/dirpath" -x -
--ignore-length --max-redirect=20 -
--adjust-extension --no-clobber -
%SYSTEM-F-ACCVIO, access violation, reason mask=00, virtual address=000000007BF06B80, PC=FFFFFFFF81074684, PS=0000001B

Improperly handled condition, image exit forced by last chance handler.
Signal arguments: Number = 0000000000000005
Name = 000000000000000C

Register dump:
R0 = 00000000000001EC R1 = 0000000000000000 R2 = 000000007BF06D60
R3 = 000000007BF6B4D0 R4 = 0000000000000000 R5 = 0000000000000000
R6 = 0000000000000000 R7 = 0000000000000000 R8 = 000000000000000A
R9 = 0000000000306D40 R10 = 0000000000000006 R11 = 0000000000316000
R12 = 0000000000000000 R13 = 0000000000000000 R14 = 0000000000306BD0
R15 = 0000000000000001 R16 = 0000000000000003 R17 = 000000007FF80000
R18 = 00000000FDF80000 R19 = FFFFFFFF81C08B48 R20 = 000000007FFF0000
R21 = 0000000000000002 R22 = 0000000000000000 R23 = FFFFFFFFFFFFFFFF
R24 = 0000000037DFD55E R25 = FFFFFFFF824F55C0 R26 = 0000000000000FD2
R27 = FFFFFFFF81C10210 R28 = 0000000000000000 R29 = 000000007ADDB390
SP = 000000007ADDB390 PC = FFFFFFFF81074684 PS = 100000000000001B
%SYSTEM-F-ACCVIO, access violation, reason mask=00, virtual address=000000007BF06200, PC=FFFFFFFF80086930, PS=0000001B

Improperly handled condition, image exit forced by last chance handler.
Signal arguments: Number = 0000000000000005
Name = 000000000000000C

Register dump:
R0 = FFFFFFFF80381930 R1 = 000000000000001B R2 = 0000000000000003
R3 = FFFFFFFF81CD2920 R4 = 0000000000000001 R5 = 000000007BF6B510
R6 = 000000001000000C R7 = 000000007FF87FC0 R8 = 000000000000000A
R9 = 0000000000306D40 R10 = 0000000000000006 R11 = 0000000000316000
R12 = 0000000000000000 R13 = FFFFFFFF81D4EE18 R14 = 0000000000306BD0
R15 = 0000000000000001 R16 = 000000007BF6B520 R17 = 000000007BF06200
R18 = 0000000000000005 R19 = FFFFFFFF80381430 R20 = 000000007FF87FA8
R21 = 000000007ADDAF8C R22 = 000000007BF06200 R23 = 000000007BF06200
R24 = 0000000000000001 R25 = 0000000000000001 R26 = FFFFFFFF80086A18
R27 = FFFFFFFF81C5CCD0 R28 = 0000000000000006 R29 = 000000007ADDAFD0
SP = 000000007ADDAF80 PC = FFFFFFFF80086930 PS = 000000000000001B

Running the program with the --debug option, the log suggests a buffer overflow:

/drive/dirpath/site-dir/main.html: merge(`http://site/main', `http://site♦
appending `http://site/' to urlpos.

/drive/dirpath/site_dir/main.html: merge(`http://site/main', `http://site♦
appending `http://site-2/' to urlpos.

[End of file]

It really ends in the middle of a site specification – quite a lot of data is appended to urlpos….

I have asked Steven Schweda to have a look, and where possible, I’ll try to find my way around in this program myself (not really familiar with C, but I’ll manage).