xdg-su

Hi all,

I am trying to run SUSE High Availability Extension.

When I go into YaST → Software Management, I can see that there are still numerous packages that need to be installed; however, I am not sure how to install them (the error message is along the lines of: “nothing provides xxx needed by xxx”).

Another issue is that the cluster is installed (I think), as it has a shortcut on the desktop; however, when I try to execute it, it comes up with the error message: “Failed to execute child process “xdg-su” (no such file or directory)”. Any ideas on overcoming this issue? Is there a way I can install it? (Use zypper? If so, what would the command be?)

Responses are greatly appreciated :slight_smile:

Hi
HAE on which release of SLES?

The xdg-utils package should be installed, AFAIK by default?

Is the system registered to the online repositories?

For zypper use ‘in’ to install, ‘se’ to search, etc. (perhaps zypper --help may be of use :wink: )
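Spelled out (the package name is just this thread's example), those zypper calls look like:

```shell
# Search for a package, list repositories, then install (run as root):
zypper se xdg-utils   # 'se' = search the configured repositories
zypper lr             # 'lr' = list repositories
zypper in xdg-utils   # 'in' = install the package
```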


Cheers Malcolm °¿° LFCS, SUSE Knowledge Partner (Linux Counter #276890)
SUSE Linux Enterprise Desktop 12 GNOME 3.10.1 Kernel 3.12.28-4-default
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!

[QUOTE=malcolmlewis;25673]Hi
HAE on which release of SLES?

The xdg-utils package should be installed, AFAIK by default?

Is the system registered to the online repositories?

For zypper use ‘in’ to install, ‘se’ to search etc.[/QUOTE]

Hey Malcolm,

Thanks for the reply.

I am running SLES 11 SP3.

I am just double-checking now that I am signed up to Novell for updates. If I am, would HAE packages also be updated under that Novell licence? (I am using the HAE 60-day trial version.)

Hi
It should be, but xdg-utils should be on the install medium.



Hey Malcolm

Tried another install, still no luck :confused:

I am trying to sign up for updates; however, I have not received an evaluation code from SUSE for the 60-day trial, so I cannot run updates.

Is it possible to get xdg-su with zypper?

I tried the commands with zypper, but it does not seem to locate it.

sudo zypper in xdg-su (is this correct?)

Hi
Well, it’s in the SLES11-SP3-Pool for x86_64, so it should be on the DVD. I don’t use sudo, but the command should work.

Have you checked on the install medium?

[CODE]zypper lr
zypper se xdg-utils[/CODE]
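If it turns out xdg-utils lives only on the DVD, the medium itself can be added as a zypper repository; a sketch (the alias and device path are illustrative):

```shell
# Register the install DVD as a repository, refresh metadata, install:
zypper ar "dvd:///?devices=/dev/sr0" SLES11-SP3-DVD
zypper ref
zypper in xdg-utils
```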



Hey Malcolm

When I run zypper lr it says my version is SUSE 11-0. Does the ‘0’ indicate that I am not running SP3, by any chance? I have looked at numerous threads to determine how one identifies the SP version, but nothing seems to be concrete.

Also, when trying to run zypper se xdg-utils, I get the message “no packages found”.

Thanks for the help so far, mate

Hi
Sounds like it; you can confirm by running:

[CODE]cat /etc/SuSE-release[/CODE]

What about active repositories from the command zypper lr?



[QUOTE=malcolmlewis;25701]Hi
Sounds like it; you can confirm by running:

cat /etc/SuSE-release

[/QUOTE]

The results are:

[CODE]version: 11
patchlevel: 0[/CODE]

Guessing I am not on SP3, and that is why HAE won’t work, as it is dedicated to SP3?

Is there a way to update to SP3? By the looks of it, one has to activate online updates, which I am having issues with, as SUSE does not provide a key/code for the 60-day trial (or I might be wrong?).

On 09/01/2015 07:44, fredprinsloo wrote:
[color=blue]

The results are:
version: 11
patchlevel: 0

Guessing I am not on SP3? And thus why HAE won’t work as it is dedicated
for SP3?[/color]

If you were running SLES11 SP3 the patchlevel would show 3.

You need to match the HAE release to the SLES release so if you have
SLES11 (SP0) you need SLE11 (SP0) HAE although both are now unsupported.
[color=blue]

Is there a way to update to SP3? By the looks of it one has to activate
online updates, which I am having issues with as SuSE does not provide a
key/code for the 60 day trial (or i might be wrong?)?[/color]

Does the reference to a 60-day trial suggest that you have freshly installed your server to test HAE, or is this an old, existing server to which you now want to add HAE?

If the former then I would suggest reinstalling afresh with SLES11 SP3 +
SLE11 SP3 HAE.

If the latter, then the supported upgrade procedure to get from SLES11 (SP0) to SLES11 SP3 is to first upgrade to SLES11 SP1, then to SLES11 SP2, before finally moving to SLES11 SP3.

HTH.

Simon
SUSE Knowledge Partner



Hey folks

Thanks for all the help. I upgraded yesterday to version 12 which has solved my issues.

Now, I just have an issue with SBD. In particular:

[CODE]Do you wish to use SBD? [y/N] y
Path to storage device (e.g. /dev/disk/by-id/…) [] /dev/disk/by-id/sda3
That doesn’t look like a block device[/CODE]

Also tried the suggested: /dev/disk/by-id/*

I partitioned 6GB (sda3) specifically for this function.

I can’t seem to get a hard drive accepted for SBD?

Any ideas?

Hi fredprinsloo,

“/dev/disk/by-id/sda3” doesn’t look like a typical block device id to me. Please check “ls -l /dev/disk/by-id” to identify the actual id of the “/dev/sda3” device and try that one.

A measure of last resort would be to use “/dev/sda3”, but since that name isn’t guaranteed to persist, I strongly advise against this for a production system.
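To illustrate (the helper function below is mine, not part of any SUSE tool), a small POSIX-shell sketch that finds the persistent by-id alias for a given kernel device:

```shell
# Print every symlink in DIR that resolves to the same device as TARGET.
# Typical use on the node:  find_alias /dev/sda3 /dev/disk/by-id
find_alias() {
    target="$(readlink -f "$1")"   # canonical path of the device
    dir="$2"                       # directory of persistent-name symlinks
    for link in "$dir"/*; do
        [ -L "$link" ] || continue
        [ "$(readlink -f "$link")" = "$target" ] && echo "$link"
    done
}
```

The printed path (something like /dev/disk/by-id/scsi-…-part3) is what the SBD setup dialog expects, rather than the kernel name sda3.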

Regards,
Jens

[QUOTE=jmozdzen;25761]“/dev/disk/by-id/sda3” doesn’t look like a typical block device id to me. Please check “ls -l /dev/disk/by-id” to identify the actual id of the “/dev/sda3” device and try that one.[/QUOTE]

Hi Jens, thanks I got it to work now.

However, I have since decided to try a manual approach, as the automatic mode was just not producing results. It is looking promising; however, when I try to launch the cluster, it does not cluster the other nodes, only the host. For example:

When I run csync2 -xv

output is:

[CODE]linux-srs2:~/Desktop # csync2 -xv
Marking file as dirty: /etc/sysconfig/sbd
Marking file as dirty: /etc/sysconfig/pacemaker
Marking file as dirty: /etc/samba/smb.conf
Marking file as dirty: /etc/lvm/lvm.conf
Marking file as dirty: /etc/drbd.d
Marking file as dirty: /etc/drbd.d/global_common.conf
Marking file as dirty: /etc/drbd.conf
Marking file as dirty: /etc/csync2/key_hagroup
Marking file as dirty: /etc/csync2/csync2.cfg
Marking file as dirty: /etc/corosync/corosync.conf
Connecting to host linux-gk5a.local (SSL) ...
Connect to 192.168.0.45:30865 (linux-gk5a.local).
Adding peer x509 certificate to db: 3082030C30820275A003020102020900A87C97CD99CFA5E8300D06092A864886F70D010105050030819E310B3009060355040613022D2D3112301006035504080C09536F6D6553746174653111300F06035504070C08536F6D654369747931193017060355040A0C10536F6D654F7267616E697A6174696F6E31193017060355040B0C10536F6D654F7267616E697A6174696F6E3111300F06035504030C08536F6D654E616D65311F301D06092A864886F70D01090116106E616D65406578616D706C652E636F6D301E170D3135303131323133343835305A170D3233303333313133343835305A30819E310B3009060355040613022D2D3112301006035504080C09536F6D6553746174653111300F06035504070C08536F6D654369747931193017060355040A0C10536F6D654F7267616E697A6174696F6E31193017060355040B0C10536F6D654F7267616E697A6174696F6E3111300F06035504030C08536F6D654E616D65311F301D06092A864886F70D01090116106E616D65406578616D706C652E636F6D30819F300D06092A864886F70D010101050003818D0030818902818100A6D742534728126F8A1E55F0E7E997FE2C987C379B2CC04C6F926942AC202039B3A620DA4C16857AEC7C688766A13E9876F6EF6A77233520A22D0C5ED7B382B1FD4B21460961EC9D8D14C2CD75E701C9A0C3401D2ACE1A8797A06BF5FE9B888160E6B8E80C0A83DAFDB8351B4ED4FFF194CFC03344D40C9671E1F2AB40CDDFEB0203010001A350304E301D0603551D0E04160414EC0F63213F3BB7C9C748953ED74C93DEC7829DAD301F0603551D23041830168014EC0F63213F3BB7C9C748953ED74C93DEC7829DAD300C0603551D13040530030101FF300D06092A864886F70D010105050003818100050A839E6BFE9BF12E650EF45842061A4C569D42A8EC4A5AC95B6FCA62B78825D2548C6445AE3001EF22BAEAE4AE93EED0DAC9B9546E25DBF16ECBABFF5A2D80566616496CD8805AF0203C9843755FB9DD6F633E144D9DFB6F99FE66BB46B32A6C820AF514B306CA907D9A70E91D5A7CC6284A111D4D4028DE4B7B3A7C8A88D3
Connection closed.
Finished with 1 errors.[/CODE]

Now, I am not sure what the error is.

When I check to see which nodes are online:

[CODE]crm status
Last updated: Wed Jan 14 12:01:59 2015
Last change: Wed Jan 14 11:56:07 2015
Stack: corosync
Current DC: linux-srs2 (2130706433) - partition WITHOUT quorum
Version: 1.1.12-ad083a8
1 Nodes configured
0 Resources configured

Online: [ linux-srs2 ]
[/CODE]

Note the intended other node linux-gk5a.local is missing from online. I checked and made sure pacemaker is active on all systems.

Any ideas?

Hi fredprinsloo,

[QUOTE]Note the intended other node linux-gk5a.local is missing from online. I checked and made sure pacemaker is active on all systems.

Any ideas?[/QUOTE]

there are quite a number of possible causes.

Is the cluster software started on the second node? Does its AIS configuration match that of the first node, especially the multicast port number? Waht does “crm status” report on the second node? Does the inter-system communication work, i.e. no firewalls inbetween, network ist multicast-capable, etc.? What’s in the logs (which are quite verbose per default, it may take a few moments to grasp what’s in there (and what’s important) if you’re new to these tools)?

Regards,
Jens

[QUOTE=jmozdzen;25800]> […] I checked and made sure pacemaker is active on all systems.

there are quite a number of possible causes.

Is the cluster software started on the second node? [/QUOTE]

skip that part of my questions :wink:

Regards,
Jens

[QUOTE=jmozdzen;25801]skip that part of my questions :wink:

Regards,
Jens[/QUOTE]

Hi Jens, please disregard the above. Figured I would give the automated feature another run, and it is looking really good!

I am now just having a problem with csync2 (see the code below). I managed to get nodes connected to the cluster, provided I manually copied over the corosync files, so that is good progress! :slight_smile: However, I fear that if the rest of the files are not copied over, there will be other issues down the line.

So, as you can see, I added hostname@ip as well, to check whether it is a networking issue of not being able to find the node, and it appears to work a little better (or does it?): csync2 does seem to connect to the client, but the rest still does not want to connect.

Any ideas on how to get csync2 to work? I suspect that I am SOOO close to getting this thing to work! :slight_smile:

[CODE]2015-01-15 13:05:29+08:00 /usr/sbin/ha-cluster-join

  • systemctl enable sshd.service
  • mkdir -m 700 -p /root/.ssh
  • mkdir -p /tmp/ha-cluster-ssh.6460
  • rm -f /tmp/ha-cluster-ssh.6460/\*
  • scp -oStrictHostKeyChecking=no root@192.168.0.3:/root/.ssh/id_\* /tmp/ha-cluster-ssh.6460/
  • mv /tmp/ha-cluster-ssh.6460/id_rsa /tmp/ha-cluster-ssh.6460/id_rsa.pub /root/.ssh/
  • rm -r /tmp/ha-cluster-ssh.6460
  • ssh root@192.168.0.3 ha-cluster-init ssh_remote
    Done (log saved to /var/log/ha-cluster-bootstrap.log)
  • rm -f /var/lib/csync2/linux-96bg.db3
  • ssh root@192.168.0.3 ha-cluster-init csync2_remote linux-96bg
  • scp root@192.168.0.3:/etc/csync2/\{csync2.cfg\,key_hagroup\} /etc/csync2
  • systemctl enable csync2.socket
  • ssh root@192.168.0.3 csync2\ -mr\ /\ \;\ csync2\ -fr\ /\ \;\ csync2\ -xv
    [COLOR="#FF0000"]Marking file as dirty: /etc/sysconfig/pacemaker
    Marking file as dirty: /etc/csync2/key_hagroup
    Marking file as dirty: /etc/corosync/corosync.conf[/COLOR]
    [COLOR="#FF0000"]Connecting to host linux-96bg.local (SSL) …
    Connect to 192.168.0.48:30865 (linux-96bg.local).
    Connecting to host linux-96bg (SSL) …[/COLOR]
    [COLOR="#FF0000"]Cannot resolve peername, getaddrinfo: Name or service not known
    Can’t create socket: Success[/COLOR]
    ERROR: Connection to remote host `linux-96bg’ failed.
    Host stays in dirty state. Try again later…
    [COLOR="#008000"]Connecting to host 192.168.0.48 (SSL) …
    Connect to 192.168.0.48:30865 (192.168.0.48).
    Adding peer x509 certificate to db: [/COLOR]3082030C30820275A003020102020900E454AE7693E66E4F300D06092A864886F70D010105050030819E310B3009060355040613022D2D3112301006035504080C09536F6D6553746174653111300F06035504070C08536F6D654369747931193017060355040A0C10536F6D654F7267616E697A6174696F6E31193017060355040B0C10536F6D654F7267616E697A6174696F6E3111300F06035504030C08536F6D654E616D65311F301D06092A864886F70D01090116106E616D65406578616D706C652E636F6D301E170D3135303131353039313032385A170D3233303430333039313032385A30819E310B3009060355040613022D2D3112301006035504080C09536F6D6553746174653111300F06035504070C08536F6D654369747931193017060355040A0C10536F6D654F7267616E697A6174696F6E31193017060355040B0C10536F6D654F7267616E697A6174696F6E3111300F06035504030C08536F6D654E616D65311F301D06092A864886F70D01090116106E616D65406578616D706C652E636F6D30819F300D06092A864886F70D010101050003818D0030818902818100D8C0E5FF566F60C1422616E428D2704D1AEC5E3F5D108FC7C6F7A06D16D3AE12F1E8799A2BC92F474E9934CE23E08869D936F0D45431C55D8BD1F83EF8670247A4DBF85F0D1AC92359F3F6593BB81EA16D0073AA217F0D105FC7B91427394B2FE66CA281D6A591FE7E73930404D6BAB976EB24EB5C43F7DE353C18BE3956051B0203010001A350304E301D0603551D0E041604140ABED1D329CF3644D6E743895AF766E8DBD8E502301F0603551D230418301680140ABED1D329CF3644D6E743895AF766E8DBD8E502300C0603551D13040530030101FF300D06092A864886F70D010105050003818100D636B3169ED3D2DE549F54FB1A1900DA5D0A8C867A9E27AD5D992D8695381405AFEE79F5460F7D8BD03D2A69448E62BD39B663EBA43B6858B014A2E60BEC451E4E6608B1826D9238A5C619DCFAA4B43021B64111298D43C6E1FDB76EB3648BD675A297FECB19474F979649F7E9E85BE29D7DE2792D1990A990AEB43302E06F4E
    Connection closed.
    [COLOR="#FF0000"]Finished with 3 errors.[/COLOR]
    [COLOR="#FF0000"]WARNING: csync2 run failed - some files may not be sync’d[/COLOR]
  • mkdir -p /tmp/ha-cluster-pssh.6460
  • rm -f /tmp/ha-cluster-pssh.6460/\*
  • pssh -H linux-96bg -H linux-96bg@192.168.0.48 -H linux-a9rq -O StrictHostKeyChecking=no -o /tmp/ha-cluster-pssh.6460 cat /root/.ssh/known_hosts
    [COLOR="#FF0000"][1] 13:06:05 [FAILURE] linux-a9rq Exited with error code 255
    [2] 13:06:05 [SUCCESS] linux-96bg
    [3] 13:06:07 [FAILURE] linux-96bg@192.168.0.48 Exited with error code 255[/COLOR]
    WARNING: known_hosts collection may be incomplete
  • pscp -H linux-96bg -H linux-96bg@192.168.0.48 -H linux-a9rq -O StrictHostKeyChecking=no /root/.ssh/known_hosts.new /root/.ssh/known_hosts
    [1] 13:06:07 [FAILURE] linux-a9rq Exited with error code 1
    [2] 13:06:07 [SUCCESS] linux-96bg
    [3] 13:06:14 [FAILURE] linux-96bg@192.168.0.48 Exited with error code 1
    WARNING: known_hosts merge may be incomplete
  • rm /root/.ssh/known_hosts.new
  • rm -r /tmp/ha-cluster-pssh.6460
  • partprobe /dev/sda
  • sleep 5
    : created /etc/corosync/corosync.conf.6460 with content:
  • sync
  • mv -f /etc/corosync/corosync.conf.6460 /etc/corosync/corosync.conf
  • csync2 -m /etc/corosync/corosync.conf
  • csync2 -f /etc/corosync/corosync.conf
  • csync2 -xv /etc/corosync/corosync.conf
    Marking file as dirty: /etc/corosync/corosync.conf
    Connecting to host linux-a9rq (SSL) …
    Cannot resolve peername, getaddrinfo: Name or service not known
    Can’t create socket: Success
    ERROR: Connection to remote host `linux-a9rq’ failed.
    Host stays in dirty state. Try again later…
    Connection closed.
    Finished with 1 errors.
    : created /etc/sysconfig/SuSEfirewall2.d/services/cluster.6460 with content:

## Name: Cluster
## Description: Opens ports for Varies Cluster related services

# space separated list of allowed TCP ports
# 30865 for csync2
# 5560 for mgmtd
# 7630 for hawk
# 21064 for dlm
TCP="30865 5560 7630 21064"

# space separated list of allowed UDP ports
UDP=""

# space separated list of allowed RPC services
RPC=""

# space separated list of allowed IP protocols
IP="igmp"

# space separated list of allowed UDP broadcast ports
BROADCAST=""

  • sync
  • mv -f /etc/sysconfig/SuSEfirewall2.d/services/cluster.6460 /etc/sysconfig/SuSEfirewall2.d/services/cluster
    : Resetting password of hacluster user
  • rm -f /var/lib/heartbeat/crm/\* /var/lib/pacemaker/cib/\*
  • systemctl enable hawk.service
    WARNING: You should change the hacluster password to something more secure!
  • systemctl disable sbd.service
  • systemctl enable pacemaker.service
  • systemctl start pacemaker.service
    A dependency job for pacemaker.service failed. See ‘journalctl -xn’ for details.
    ERROR: Failed to start pacemaker.service[/CODE]

Thanks for the help so far guys!

Hi fredprinsloo,

[QUOTE][COLOR=#FF0000]Connecting to host linux-96bg (SSL) …[/COLOR] [COLOR=#FF0000]
Cannot resolve peername, getaddrinfo: Name or service not known
Can’t create socket: Success[/COLOR]
ERROR: Connection to remote host `linux-96bg’ failed.[/QUOTE]

that sounds like a DNS setup problem - make sure that all nodes are able to resolve their peer nodes’ names to IP addresses…
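As a quick sanity check (the helper below is a sketch of mine, and the hostnames/IPs are the ones appearing in this thread), one can verify whether a peer name is at least covered by /etc/hosts; this only consults the hosts file, not DNS:

```shell
# Print the IP a name maps to in a hosts-format file (default /etc/hosts);
# exits non-zero when the name is not listed at all.
hosts_lookup() {
    name="$1"
    file="${2:-/etc/hosts}"
    awk -v n="$name" '
        $1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) { print $1; found = 1; exit } }
        END { exit !found }
    ' "$file"
}

# Example (names from this thread):
#   hosts_lookup linux-96bg || echo "add '192.168.0.48 linux-96bg.local linux-96bg' to /etc/hosts"
```

Adding a static entry for every peer on every node is a simple way to take DNS out of the equation for a two-node test cluster.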

Regards,
Jens