Monthly maintenance in itself showed no problems. Mail was as usual:
PMAS statistics for February
Total messages   : 2456 = 100.0 %
DNS Blacklisted  :    0 =   0.0 %  (Files: 0)
Relay attempts   : 1280 =  52.1 %  (Files: 28)
Accepted by PMAS : 1176 =  47.8 %  (Files: 28)
Handled by explicit rule
  Rejected       :  842 = 71.5 % (processed), 34.2 % (all)
  Accepted       :  116 =  9.8 % (processed),  4.7 % (all)
Handled by content
  Discarded      :  161 = 13.6 % (processed),  6.5 % (all)
  Quarantined    :   49 =  4.1 % (processed),  1.9 % (all)
  Delivered      :    8 =  0.6 % (processed),  0.3 % (all)
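The two percentage columns use different denominators: "(processed)" is relative to the 1176 messages accepted by PMAS, "(all)" to the total of 2456. A quick check of the arithmetic (the truncation to one decimal is my inference from the listed figures, not documented PMAS behaviour):

```python
import math

# Counts copied from the statistics above.
TOTAL = 2456       # all messages
PROCESSED = 1176   # accepted by PMAS

def pct(n, d):
    """One decimal place, truncated - matching how the figures above read."""
    return math.floor(n / d * 1000) / 10

for name, n in [("Rejected", 842), ("Accepted", 116), ("Discarded", 161),
                ("Quarantined", 49), ("Delivered", 8)]:
    print(f"{name:12s}: {pct(n, PROCESSED):5.1f} % (processed), "
          f"{pct(n, TOTAL):5.1f} % (all)")
```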
However, there has been something strange in the antirelay logfiles: between 10-Feb and today, they are a mixture of PMAS.LOG content and PMAS_ANTIRELAY.LOG content, so all files were over 60 blocks in size. Searching these files for the antirelay status (571) showed that just two actually required inspection: those of the 11th and the 25th:
11-FEB-2018 between 21:24:54.19 and 21:28:00.87, from address 220.127.116.11 (190 entries)
25-FEB-2018 between 20:29:42.78 and 25-FEB-2018, from address 18.104.22.168 (146 entries).
In both cases, the sender used a bogus user from this domain, trying to relay to email@example.com. Once again it was hosted by Hostwinds.com, so I’ll warn them again.
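The manual search for 571 entries can be sketched as follows: collect every line carrying that status and summarize them per source address. The log-line layout in the sample is an assumption for illustration only; the real PMAS_ANTIRELAY.LOG records differ in detail, so the pattern would need adjusting.

```python
import re
from collections import defaultdict

# A timestamp, a 571 status somewhere after it, and a source address.
# This layout is an assumption; adapt the pattern to the real records.
LINE = re.compile(r"(\d{2}-[A-Z]{3}-\d{4} [\d:.]+).*571.*?(\d+\.\d+\.\d+\.\d+)")

def summarize(lines):
    """Map each source address to the timestamps of its 571 entries."""
    hits = defaultdict(list)
    for line in lines:
        m = LINE.search(line)
        if m:
            stamp, addr = m.groups()
            hits[addr].append(stamp)
    return hits

# Made-up sample lines, reusing the first address reported above:
sample = [
    "11-FEB-2018 21:24:54.19 571 relay attempt from 220.127.116.11",
    "11-FEB-2018 21:28:00.87 571 relay attempt from 220.127.116.11",
]
for addr, stamps in summarize(sample).items():
    print(f"{addr}: {len(stamps)} entries, "
          f"between {stamps[0]} and {stamps[-1]}")
```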
Last Itanium system into cluster
The two ‘big’ servers (IRIS and INDRA) have already been added to the cluster, but one (INGE) was still to be done.
Contrary to what I initially thought, these two big servers have two single-core processors, without hyperthreading. Being similar in hardware (and from the same supplier), it looks like they are the older two of the three. It is therefore obvious that only CPU #1 is added to the set of CPUs.
INGE, once booted, shows that CPUs #1, 2 and 3 are added, so it is a newer one. However, it is the one causing problems. I had the machine running when I added the other systems, but I couldn’t do this one because, earlier, there was an issue with the SCSI controller: the machine didn’t boot when the system disk was in slot 0 (PUN 0, LUN 0) – where it was when I installed VMS on it. I moved it to slot 2 (PUN 2, LUN 0), where I could boot it from EFI by accessing this disk directly and starting FS0:\EFI\VMS\VMS_LOADER. But on the EFI boot menu, it was still listed on PUN 0, LUN 0. So I needed to do it interactively.
However, trying to access the ILO yesterday failed: it was unresponsive to PING, TELNET and HTTP on the designated address. So today it was a matter of finding the cause. Thinking it might be a failing battery (though the supplier said he had replaced it), I took the ILO out, put in a new battery, and re-installed it. Since DHCP is enabled by default, I should be able to figure out what address it would get – but on DIANA,
$ DHCPDBDUMP didn’t show any address that worked.
$ TCPIP SHO HOST gave me a hint: since the name of the management port is MP<macaddress>, I could now access the MP, set the configuration to what it should be, and work from there. I booted the machine from bay #2 – and that worked. Next, I shut the machine down to find out what message was given when the disk was in slot #0 – where it should be. This time, there was nothing wrong; perhaps because, while the ILO was out, I checked the connector and may have moved it a bit… So I added the machine to the cluster. Now that I can access the disk containing the new licenses, this machine is set to work until 12-Mar-2019 – like all the others.
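Since the MP registers under the name MP<macaddress>, the host name to look for can be predicted from the MAC address printed on the ILO. A minimal sketch (the MAC below is a made-up example):

```python
def mp_hostname(mac: str) -> str:
    """Default management-port host name: 'MP' plus the MAC, no separators."""
    return "MP" + mac.replace(":", "").replace("-", "").upper()

# Made-up MAC address, for illustration only:
print(mp_hostname("00:30:6e:4a:12:f0"))   # -> MP00306E4A12F0
```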
Hopefully, this will keep working – fingers crossed.
The next stage is setting this machine up (as well as INDRA) in the same way as DIANA and IRIS, plus a quorum node on a laptop running a 64-bit version of Windows (so not the current one).
Fan bearings gone
The main fan of the DS10 server (DIANA) is getting very loud – it sounds like the bearings are gone. A new one has been ordered, but it is uncertain when it will arrive…