SES 3 installation issues

A thread for recording and discussing issues encountered while installing SUSE Enterprise Storage, and for notes that supplement the manuals.

The link to the quick manual is here

2.1.2 Preparing Each Ceph Node

NTP already installed.

OpenSSH already installed and running.

[CODE]ses01:~ # useradd -m ceph
useradd: user 'ceph' already exists

ses01:~ # su - ceph
This account is currently not available.[/CODE]

The user ceph exists, but with login disabled and the nologin shell set.
Solution: run YaST2, go to “User and Group Management”, filter system users, edit the user “ceph”, clear the “Disable User Login” checkbox, and set the login shell to /bin/bash.
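The same fix can also be applied from the command line; a minimal sketch (run as root; the nologin shell path may differ on your system):

[CODE]# Give the ceph user a normal login shell instead of /sbin/nologin
# ("This account is currently not available" is the nologin message)
usermod -s /bin/bash ceph[/CODE]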

[CODE]ceph@ses01:~> ssh-copy-id ceph@172.18.66.16
The authenticity of host '172.18.66.16 (172.18.66.16)' can't be established.
ECDSA key fingerprint is 04:55:4b:a6:42:b4:0f:db:86:b9:4b:3b:c0:ab:5e:5f [MD5].
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Password:
Password:
Password:
Permission denied (publickey,keyboard-interactive).[/CODE]

How can this be resolved in a better way, and why does it appear at all if everything was installed out of the box?

Solution: run YaST2, go to “User and Group Management”, filter system users, edit the user “ceph”, set the same known password on every node, and use this password when prompted by the ssh-copy-id utility.
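A minimal command-line sketch of the same workflow (the host name ses02 is just an example): give the ceph user a known password on the target node, then push the key from the admin node.

[CODE]# On each target node, as root: set a known password for the ceph user
passwd ceph

# On the admin node, as the ceph user: create a key if needed, then copy it
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id ceph@ses02[/CODE]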

Run ceph-deploy to install Ceph on each node

[CODE]…
[ceph_deploy.install][DEBUG ] Detecting platform for host ses02 …
[ses02][DEBUG ] connection detected need for sudo
sudo: no tty present and no askpass program specified
[ses02][DEBUG ] connected to host: ses02
[ceph_deploy][ERROR ] RuntimeError: remote connection got closed, ensure requiretty is disabled for ses02[/CODE]

Solution: DO NOT use visudo to add the “Defaults:ceph !requiretty” line, as written in the Ceph quick-deploy book and the SES manual!
Use YaST2 instead: “Security and Users” > “Sudo”, and allow the user ceph to run all commands as root without a password (see the sketch below). I don't know why direct editing via visudo did not affect the system, but YaST2 applied all the changes fine.
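For reference, the configuration YaST2 writes should be roughly equivalent to the sudoers sketch below; the drop-in file name is illustrative, not something the manual prescribes.

[CODE]# /etc/sudoers.d/ceph -- a sketch; edit only via "visudo -f /etc/sudoers.d/ceph"
# Allow the ceph deployment user to run any command as root without a password
ceph ALL=(ALL) NOPASSWD: ALL
# Do not require a TTY for this user, so ceph-deploy's non-interactive sudo works
Defaults:ceph !requiretty[/CODE]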

Already installed in the system.

[CODE]Loading repository data...
Reading installed packages...
'calamari-clients' not found in package names. Trying capabilities.
'romana' providing 'calamari-clients' is already installed.
Resolving package dependencies...
Nothing to do.[/CODE]

On 15/06/16 11:44, polezhaevdmi wrote:

[QUOTE]Thread for marking and discussing SUSE Enterprise Storage issues in a process of installation and supplemental manuals also.

Link for the quick manual is ‘here’
(https://www.suse.com/documentation/ses-1/book_storage_admin/data/ceph_install_ceph-deploy.html)

2.1.2 ‘Preparing Each Ceph Node’
(https://www.suse.com/documentation/ses-1/book_storage_admin/data/ceph_install_ceph-deploy.html#ceph_install_ceph-deploy_eachnode)[/QUOTE]

Note that the above URLs point to the SUSE Enterprise Storage version 1.0 documentation, not the version 3.0 that your subject line indicates.

For SES 3.0 you should follow
https://www.suse.com/documentation/ses-3/book_storage_admin/data/ceph_install_ceph-deploy.html
[QUOTE]Install NTP.
NTP already installed.

Install SSH server.
OpenSSH already installed and running.

Add a ceph user.
ses01:~ # useradd -m ceph
useradd: user ‘ceph’ already exists

6. On the admin node, become the ceph user.
ses01:~ # su - ceph
This account is currently not available.

The user ceph exists, but with login disabled and nologin shell set.
Solution: run YAST2, go to “User and Group Management”, filter system users, edit user “ceph”, remove “Disable User login” mark and set login shell to /bin/bash.[/QUOTE]

If you are following the SES 1.0 documentation (which the above steps suggest), this would explain your difficulties.

HTH.

Simon
SUSE Knowledge Partner



Now open your Web browser and point it to the hostname/IP address of the server where you installed Calamari.

Calamari doesn’t respond.

[CODE]Service Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Additionally, a 503 Service Unavailable error was encountered while trying to use an ErrorDocument to handle the request.[/CODE]
Something else must be done before the Calamari web page can be accessed…

Yes, Simon, you are right!
Let me laugh for a couple of hours :slight_smile:

Investigated a bit more (with the correct manual this time).

  • The Calamari server and Calamari Ceph patterns are installed.
  • The effect does not depend on whether the firewall is on or off.
  • YaST2 firewall management conflicts with SuSEfirewall2 controlled from the command line.
  • YaST2 reports that the HTTP server (Apache) is not running. Is Apache necessary for the Calamari Ceph administration panel? (See the sketch after this list.)
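Calamari's web UI is served through Apache, so a stopped apache2 or an uninitialized Calamari backend can both produce the 503 above. A minimal sketch of the checks, assuming the standard SLES service names (apache2, salt-master) and that calamari-ctl is installed:

[CODE]# Check that Apache and the Salt master are running on the Calamari host
systemctl status apache2 salt-master

# Initialize Calamari if that has not been done yet (creates DB and admin user)
calamari-ctl initialize

# Restart Apache and retry the web page
systemctl restart apache2[/CODE]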

I am getting an error when trying to install SES 3.0 on VirtualBox.

[CODE]Internal error: Please report a bug report with logs. Details: Input/Output error @io_fillbuf fd:28 /mounts/mp_001/usr/share/YaST2/include/partitioning/custom_parts_dialog.rb Caller: /usr/lib64/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:55:in 'require'[/CODE]

But having said that, I have managed to install one node.

I need two more nodes to set up an SES cluster. Please help!

Maybe it would be easier to clone the first installed node (using Ghost, Acronis, or a VM image) and change the system settings on the new nodes? As for me, I am installing in a VMware 6 Update 1 environment and had no issues installing the OS or the storage extensions.

OK, the second attempt, from scratch. I am trying to set up the system with automated deployment via Crowbar. The planned virtual structure is as follows.
DNS domain: ses.lab.local
Nodes:
admin (the admin machine, master DNS, and NTP gateway to the Internet)
deploy (for Crowbar)
mon01, mon02, mon03 (also secondary DNS and NTP)
osd01, osd02, osd03, osd04
Networks:
172.18.64.0/255.255.252.0 - physical network with Internet access via a proxy
192.168.124.0/255.255.255.0 - virtual network for internal SES communications

Each node has 2 virtual NICs:
E1000 - connected to the 172.18.64.0 network via a virtual switch linked to the external LAN with DHCP
VMXNET3 - connected to the 192.168.124.0 network inside a virtual switch only; all IPs are assigned manually
The VMXNET3 card of the deploy.ses.lab.local node has the address 192.168.124.10 to simplify the Crowbar setup (a sample configuration is sketched below).
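A minimal sketch of the static configuration for the internal NIC on SLES with wicked; the interface name eth1 is an assumption and must match the actual device name:

[CODE]# /etc/sysconfig/network/ifcfg-eth1 -- internal SES network (sketch)
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='192.168.124.10/24'[/CODE]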

Links
https://www.suse.com/documentation/ses-3/book_storage_admin/data/ceph_install_crowbar.html
https://www.suse.com/documentation/suse-cloud-5/book_cloud_deploy/data/sec_depl_adm_inst_crowbar.html

The first note: do NOT install the deploy node with a text interface only (Base System or similar), as you will waste a lot of time fixing Crowbar initialization problems. I reinstalled the node with a graphical environment and all the previous problems were gone.

The Crowbar installation script (screen install-ses-admin) reports the error “crowbar service is not running”.
An attempt to start the crowbar service (YaST2 / Services Manager) is unsuccessful: the status file puma.state is not found. Who forgot to put the file in the right place? :slight_smile:
The installation log is attached. [ATTACH]161[/ATTACH]
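A few generic checks that can narrow this down, assuming the service is a systemd unit named crowbar, as the error message suggests (the search paths in the last command are guesses):

[CODE]# Is the unit known to systemd, and why is it not running?
systemctl status crowbar
journalctl -u crowbar -b --no-pager | tail -n 50

# Find out where the tooling expects the missing puma.state file
grep -rl "puma.state" /opt /etc 2>/dev/null[/CODE]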

As Crowbar seems to be a separate realm of madness, it is much better to start with ceph-deploy. That gets you further, faster.
OK, Ceph, the RADOS Gateway and Calamari are operating fine. Now it's time for iSCSI.
As I see it, pools created with erasure coding are unsuitable for placing block storage (RBD) images.
Am I right? If yes, why is erasure coding so discriminated against?

[CODE]node01:/home/cephadm # ceph osd pool create rbd01 8 8 erasure
pool 'rbd01' created

node01:/home/cephadm # rbd --pool rbd01 create --size=102400 iscsistor01
2016-07-13 16:41:54.110215 7f7308d7fd80 -1 librbd: error adding image to directory: (95) Operation not supported
rbd: create error: (95) Operation not supported

node01:/home/cephadm # ceph osd pool delete rbd01 rbd01 --yes-i-really-really-mean-it
pool 'rbd01' removed

node01:/home/cephadm # ceph osd pool create rbd01 8 8
pool 'rbd01' created

node01:/home/cephadm # rbd --pool rbd01 create --size=102400 iscsistor01

node01:/home/cephadm # rbd --pool rbd01 ls
iscsistor01[/CODE]

Trying to mount CephFS on the ‘admin node’, I found that the kernel client module for CephFS is not installed, and there is no appropriate package to install it from the SES3 distribution media. So, both the mount -t ceph and mount.ceph commands are unsuccessful.

‘modprobe ceph’ reports ‘FATAL: Module ceph not found’.

The SES3 documentation (15.2.2 “Mounting CephFS”) and the DVD should be revised to correspond to each other.

How can this issue be solved? Should I try to compile the client module from https://github.com/ceph/ceph-client, or do something else?
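For reference, the kernel mount that section 15.2.2 describes looks roughly like the sketch below, once a ceph.ko module is actually available; the monitor address and secret-file path are placeholders, not values from this setup.

[CODE]# Load the CephFS kernel client (this is what fails with "Module ceph not found")
sudo modprobe ceph

# Mount the file system from a monitor node; adjust address, name and secretfile
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.124.21:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret[/CODE]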

The SES3 installation media turned out to contain modules with a very old bug.
Calamari tries to connect to Oracle at 172.16.79.128/7003 and 172.16.79.128/7002, which makes our Cisco ASA angry and our network security officer insane. :slight_smile:

[QUOTE]Jul 25 2016 11:53:57: %ASA-4-106023: Deny tcp src cc-inside:172.18.65.142/58969 dst outside:172.16.79.128/7003 by access-group “CC-INSIDE-IN” [0xe3955a04, 0x0]
Jul 25 2016 11:53:57: %ASA-4-106023: Deny tcp src cc-inside:172.18.65.142/56106 dst outside:172.16.79.128/7002 by access-group “CC-INSIDE-IN” [0xe3955a04, 0x0][/QUOTE]
The bug report link is here. It was opened a year ago and closed within 4 months.
As the evaluation registration via proxy was unsuccessful, the actual solution is: edit the file /usr/lib/python2.7/site-packages/cthulhu/manager/notifier.py, replace the hard-coded 172.16.79.128 addresses with 127.0.0.1, and then restart the host running Calamari.
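A minimal sketch of that edit (not verified against the SES3 package; back up the file first):

[CODE]# Back up the notifier module, then point the hard-coded addresses at localhost
cp /usr/lib/python2.7/site-packages/cthulhu/manager/notifier.py \
   /usr/lib/python2.7/site-packages/cthulhu/manager/notifier.py.orig
sed -i 's/172\.16\.79\.128/127.0.0.1/g' \
   /usr/lib/python2.7/site-packages/cthulhu/manager/notifier.py
# Then restart the host running Calamari, as described above[/CODE]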

Yes, I was right, and the problem is severe enough that the developers cannot allow many operations on erasure-coded pools: the partial (random) overwrites those operations need conflict with the object nature of Ceph.
http://ceph-users.ceph.narkive.com/eRAVMAu1/why-the-erasure-code-pool-not-support-random-write
Classic storage wins this fight.
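To make the distinction concrete, a hedged sketch (pool and object names are illustrative): whole-object writes to an erasure-coded pool succeed, which is why RADOS Gateway data can live there, while RBD needs the partial overwrites that such pools refuse.

[CODE]# Whole-object write to an erasure-coded pool: expected to work
ceph osd pool create ecpool01 8 8 erasure
echo "hello" > /tmp/obj.txt
rados -p ecpool01 put testobj /tmp/obj.txt
rados -p ecpool01 ls
# Partial overwrites (what RBD relies on) are what erasure-coded pools reject[/CODE]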