CCISS support was dropped in SLE-12. I see RHEL/CentOS 7 have an ELRepo repository that allows one to still download the old drivers so that I can keep using my old HP ProLiant ML350 G5 system. Does SUSE have a similar repo (whether it is considered supported or not isn’t a concern of mine)? We are doing some software testing and I would prefer to put SLES 12 on this system instead of the current SLES 11.
jdh239 Wrote in message:
[color=blue]
CCISS support was dropped in SLE-12. I see RHEL/CentOS 7 have an ELRepo
repository that allows one to still download the old drivers so that I can
keep using my old HP ProLiant ML350 G5 system. Does SUSE have a similar repo
(whether it is considered supported or not isn’t a concern of mine)? We
are doing some software testing and I would prefer to put SLES 12 on
this system instead of the current SLES 11.[/color]
Which storage controller do you have in your ML350 G5 system?
cciss has been replaced by hpsa, although that doesn’t support the
same controllers as cciss - see the list at http://cciss.sourceforge.net/
if you haven’t already.
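If you’re not sure which controller is fitted, something like this should show both the model and the driver bound to it (a rough sketch, assuming the pciutils package is installed):
[CODE]
# Show HP Smart Array / RAID controllers and the kernel driver bound to them.
# The "Kernel driver in use:" line will say cciss or hpsa.
lspci -nn | grep -i 'raid'
lspci -k | grep -i -A 3 'raid'
[/CODE]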
For a document from HP on installing SLES12 on HP ProLiant servers
see also http://h20195.www2.hp.com/V2/getpdf.aspx/4AA5-4763ENW.pdf
HTH.
Simon Flood
SUSE Knowledge Partner
Hi all,
after struggling for a while, I’ve managed to upgrade a SLES 11 SP3 install to SLES 12 SP1 on a ProLiant BL460c G1 blade inside a C7000 enclosure. G1 blades are rather old and use an HP E200i RAID controller that is also quite old, and explicitly not supported by the hpsa driver shipped with SLES 12 (which no longer provides the cciss driver). But they are still very useful for testing/development purposes, and I would have had to ‘throw away’ a lot of them (16, to be exact) once our production servers move to SLES 12 (which is starting to happen right now).
Summarizing the steps:
- Did a fresh install of SLES 11 SP3; you can’t start from SP4 because, AFAIK, it does not provide the cciss driver (SP3 being the latest revision that provides it).
- Did an update to 11 SP4, and it kept the cciss driver;
- Rebooted the install and, in the boot parameters for 11 SP4 in the GRUB menu, added “cciss.cciss_allow_hpsa=1 hpsa.hpsa_allow_any=1” (without quotes).
The first parameter tells the cciss driver to get out of the way and let the hpsa driver try to find the RAID controller; the second tells the hpsa driver to take hold of the RAID controller even if it does not recognize the model (the case with the E200i, which is officially not supported by it). Luckily for me it did, and it successfully activated the controller.
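To avoid typing the parameters at the GRUB menu on every boot, they can also be made persistent by appending them to the kernel line in /boot/grub/menu.lst (SLES 11 still uses GRUB legacy). A sketch only - the title, root device, and paths below are illustrative placeholders, not taken from a real system:
[CODE]
# /boot/grub/menu.lst (GRUB legacy on SLES 11) - append both module
# parameters to the kernel line of the entry you boot from.
# Device names and paths here are placeholders; use your own.
title SLES 11 SP4
    root (hd0,0)
    kernel /boot/vmlinuz root=/dev/sda2 cciss.cciss_allow_hpsa=1 hpsa.hpsa_allow_any=1
    initrd /boot/initrd
[/CODE]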
- At this point, your install boots normally, but all the references to the disks in the old cciss ‘style’ (e.g., in my case, /dev/disk/by-id/cciss-3600508b1001037353420202020200007-part1 for the swap partition and /dev/disk/by-id/cciss-3600508b1001037353420202020200007-part2 for the ‘/’ partition) are replaced by SCSI names, since cciss is a block driver and hpsa is a SCSI driver - in my case,
/dev/disk/by-id/cciss-3600508b1001037353420202020200007-part1 became /dev/sda1
and
/dev/disk/by-id/cciss-3600508b1001037353420202020200007-part2 became /dev/sda2
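A quick way to see the names hpsa has assigned, and to find any leftover cciss-style references in the install (a sketch; output will differ per machine):
[CODE]
# List the new SCSI-style device names after booting with hpsa...
ls -l /dev/disk/by-id/
# ...and find the (still cciss-style) references the install is using.
grep cciss /etc/fstab /boot/grub/menu.lst
[/CODE]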
- Because of that, before you try to upgrade to 12 SP1, you must change all the references to disks/partitions in the install from the old cciss-style names to the new SCSI-style names, or the new driver will not recognize your partitions during the upgrade process, because they will be using names it cannot understand (‘cciss’ names).
In my case, I just changed the references in /etc/fstab and rebooted - during the upgrade routine a new /boot/grub2 directory was created, leaving the /boot/grub menu untouched with all the old cciss-style references to the disks, and new entries referencing the SCSI-style names were created in /boot/grub2/grub.cfg.
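As an illustration, the /etc/fstab change amounted to something like the following (the ext3 filesystem type and mount options here are assumptions for the sketch; keep whatever your install actually uses - mounting by UUID instead would also be robust against further renames):
[CODE]
# Before - cciss-style by-id names (from the cciss block driver):
/dev/disk/by-id/cciss-3600508b1001037353420202020200007-part1  swap  swap  defaults  0 0
/dev/disk/by-id/cciss-3600508b1001037353420202020200007-part2  /     ext3  defaults  1 1

# After - SCSI names as assigned by hpsa (fs type/options assumed):
/dev/sda1  swap  swap  defaults  0 0
/dev/sda2  /     ext3  defaults  1 1
[/CODE]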
- Lastly, I booted the 12 SP1 CD, chose ‘Upgrade’ and, before hitting Enter, again added the “hpsa.hpsa_allow_any=1” parameter to its boot options, so the kernel used during the install process could again recognize the controller. Since /etc/fstab had already been modified to SCSI names, it was able to find the 11 SP4 install and upgrade it.
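Once the upgraded system is up, it’s worth confirming that hpsa really claimed the controller, and making the parameter persistent under GRUB2 (a sketch; the dmesg output will vary per controller):
[CODE]
# Confirm the hpsa module is loaded and found the controller.
lsmod | grep hpsa
dmesg | grep -i hpsa
# Runtime value of the module parameter (should be 1).
cat /sys/module/hpsa/parameters/hpsa_allow_any
# To persist it on SLES 12 (GRUB2): add hpsa.hpsa_allow_any=1 to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then regenerate:
grub2-mkconfig -o /boot/grub2/grub.cfg
[/CODE]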
Another small quirk after the upgrade - I don’t understand why, but on SLES 11 the X server (or gdm) was able to accept multiple sessions for the same user (‘root’ in my case), so I was able to use the local screen and VNC connections at the same time. On SLES 12, it apparently accepts only one session per user, so I must log out from the local session to start a VNC connection to the server. Not critical for me.
That’s it, I hope this will be useful to people who own ‘old’ but still useful HP servers with unsupported HP RAID controllers.
Remember, of course, that this very likely makes the install unsupported by SUSE (since you’re using an unsupported controller), and that it should be used only for testing/development (not production) environments.
Cheers,
Luis Derani
ALESP - Brazil
Hi Luis,
Thanks for the hint - it works great. After updating and upgrading to the newest service pack, “ssh -X” works too.
Regards,
Manfred