Monday, February 27, 2023

Typosquatting: New Malicious Python Packages on PyPI

PyPI (Python Package Index) is the official repository for Python packages. It is used by developers and users worldwide to find and install Python packages. However, PyPI has been targeted by attackers who uploaded malicious packages to the repository.

Trojanized PyPI packages are Python packages that have been modified by attackers to include malicious code. These packages are usually uploaded with names similar to popular packages, so users might not notice the difference. When users download and install these packages, the malicious code gets executed on their systems, and attackers can use it to steal data or take control of the affected systems.

Cybersecurity researchers are warning of "imposter packages" mimicking popular libraries available on the Python Package Index (PyPI) repository.

The 41 malicious PyPI packages were found to pose as typosquatted variants of legitimate modules such as http, aiohttp, requests, urllib, and urllib3. The reported names include:

aio5, aio6, htps1, httiop, httops, httplat, httpscolor, httpsing, httpslib, httpsos, httpsp, httpssp, httpssus, httpsus, httpxgetter, httpxmodifier, httpxrequester, httpxrequesterv2, httpxv2, httpxv3, libhttps, piphttps, pohttp, requestsd, requestse, requestst, ulrlib3, urelib3, urklib3, urlkib3, urllb, urllib33, urolib3, xhttpsp
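Many of these names sit within an edit or two of the legitimate modules. As an illustration only (not the researchers' method), Python's standard difflib can flag names that are suspiciously close to, but not exactly, a popular package:

```python
import difflib

# Popular packages being imitated (from the report above)
POPULAR = ["http", "aiohttp", "requests", "urllib", "urllib3"]

def looks_typosquatted(name, cutoff=0.8):
    """Return True if `name` is very similar to, but not exactly,
    one of the popular package names."""
    candidate = name.lower()
    if candidate in POPULAR:
        return False  # the real package, not an imposter
    return bool(difflib.get_close_matches(candidate, POPULAR, n=1, cutoff=cutoff))

print(looks_typosquatted("requestsd"))  # True: one letter away from "requests"
print(looks_typosquatted("requests"))   # False: the legitimate name
```

The cutoff is a tunable guess; a lower value catches more of the list above at the cost of false positives.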

Finally, as Valentić from ReversingLabs notes, developers should frequently conduct security assessments of third-party libraries and other dependencies in their code.

PyPI advised any users who think they have been compromised to contact security@pypi.org with details such as the sender's email address and the URL of the malicious site, to help the administrators respond to the issue.

Here is a simple Python script, which I deployed via Ansible; it uses pkg_resources.get_distribution() to check whether any of those 41 packages are installed.
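The script itself did not make it into the original post, so what follows is a minimal sketch of what such a check could look like (the package list is truncated for brevity; extend it with the full list above):

```python
import pkg_resources

# A few of the 41 reported names (extend with the full list above)
SUSPICIOUS = ["aio5", "aio6", "htps1", "httiop", "requestsd", "urllib33"]

def find_installed(names):
    """Return (name, version) for each suspicious package that is installed."""
    found = []
    for name in names:
        try:
            dist = pkg_resources.get_distribution(name)
            found.append((dist.project_name, dist.version))
        except pkg_resources.DistributionNotFound:
            continue
    return found

if __name__ == "__main__":
    hits = find_installed(SUSPICIOUS)
    for name, version in hits:
        print(f"WARNING: suspicious package installed: {name} {version}")
    if not hits:
        print("None of the checked packages are installed.")
```

Run under the same Python interpreter your applications use, since pkg_resources only sees that interpreter's site-packages.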

Friday, April 29, 2022

systemd-tmpfiles to manage temporary files and directories on CentOS/RHEL

A modern system requires a large amount of temporary files and directories. Some applications (and users) use the /tmp directory to store temporary data, while others use a more specific location for the task, such as daemon and user-specific volatile directories in /run. In this context, volatile means that the file system that stores these files only exists in memory. When the system restarts or loses power, all content in volatile storage will disappear.

To keep a system running smoothly, these directories and files need to be created when they don't exist, as daemons and scripts may rely on them being there, and old files need to be deleted so they don't fill up disk space or provide stale information. CentOS/RHEL 7 and later versions include a tool called systemd-tmpfiles, which provides a structured and configurable method for managing temporary files and directories. The cleanup is driven by a timer unit, systemd-tmpfiles-clean.timer, which runs systemd-tmpfiles --clean 15 minutes after system startup and then every 24 hours thereafter.



The configuration files are located in several places and are applied in a hierarchical priority order:
1.   /etc/tmpfiles.d/*.conf
2.   /run/tmpfiles.d/*.conf
3.   /usr/lib/tmpfiles.d/*.conf
The files in /usr/lib/tmpfiles.d/ are provided by relevant RPM packages and should not be edited.
The files under /run/tmpfiles.d/ are themselves volatile files, typically used by daemons to manage their own runtime temporary files.
The files in /etc/tmpfiles.d/ are intended for administrators to configure custom temporary locations and override default values provided by the vendor.
If a file in /run/tmpfiles.d/ has the same file name as a file in /usr/lib/tmpfiles.d/, then the file in /run/tmpfiles.d/ is used. If a file in /etc/tmpfiles.d/ has the same file name as a file in /run/tmpfiles.d/ or /usr/lib/tmpfiles.d/, then the file in /etc/tmpfiles.d/ is used.
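As a small example of the administrator-facing layer, a drop-in file such as /etc/tmpfiles.d/myapp.conf (myapp is a hypothetical application) could declare a runtime directory and a self-cleaning scratch directory:

```
# Type  Path          Mode  UID   GID   Age  Argument
d       /run/myapp    0755  root  root  -    -
d       /tmp/myapp    1777  root  root  1d   -
```

The directories are created at boot (or immediately with systemd-tmpfiles --create), and the Age field tells the cleanup run to delete files in /tmp/myapp older than one day.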

Friday, July 31, 2020

Systemd automatically unmounts a mounted filesystem

A curious thing in RHEL 7: systemd keeps its own view (a kind of cache) of the filesystems mounted from /etc/fstab.

To recreate the problem, I had to change the LVs backing the filesystems /opt/MyfslogSD and /opt/MyfsWalComprSD. After updating the fstab file I re-mounted the filesystems, but checking the log I realised that systemd kept trying to unmount them over and over again until, hours later, no processes were left running inside them.

Jul 2 14:54:17 Server1 systemd: Unit opt-MyfslogSD.mount is bound to inactive unit dev-vgdata-lvMyfslogSD.device. Stopping, too.
Jul 2 14:54:17 Server1 systemd: Unmounting /opt/MyfslogSD...
Jul 2 14:54:17 Server1 umount: (In some cases useful info about processes that use
Jul 2 14:54:17 Server1 umount: the device is found by lsof(8) or fuser(1))
Jul 2 14:54:17 Server1 systemd: opt-MyfslogSD.mount mount process exited, code=exited status=32
Jul 2 14:54:17 Server1 systemd: Failed unmounting /opt/MyfslogSD.
Jul 2 14:54:17 Server1 systemd: Unit opt-MyfslogSD.mount is bound to inactive unit dev-vgdata-lvMyfslogSD.device. Stopping, too.
Jul 2 14:54:17 Server1 systemd: Unmounting /opt/MyfslogSD...
Jul 2 14:54:17 Server1 umount: (In some cases useful info about processes that use
Jul 2 14:54:17 Server1 umount: the device is found by lsof(8) or fuser(1))
Jul 2 14:54:17 Server1 systemd: opt-MyfslogSD.mount mount process exited, code=exited status=32
Jul 2 14:54:17 Server1 systemd: Failed unmounting /opt/MyfslogSD.


[root@Server1 ~]# systemctl --all | grep opt-Myfs
opt-MyfsBackupSD.mount                        loaded    active   mounted   /opt/MyfsBackupSD
opt-MyfsDataSD.mount                          loaded    active   mounted   /opt/MyfsDataSD
opt-MyfsLogSD.mount                           loaded    active   mounted   /opt/MyfsLogSD
opt-MyfsScriptsSD.mount                       loaded    active   mounted   /opt/MyfsScriptsSD
opt-MyfsWalComprSD.mount                      loaded    inactive mounted   /opt/MyfsWalComprSD

opt-MyfslogSD.mount                           loaded    inactive mounted   /opt/MyfslogSD



After altering fstab, one should either run systemctl daemon-reload (this makes systemd re-parse /etc/fstab and pick up the changes) or reboot.

[root@Server1 ~]# systemctl --all | grep opt-Myfs
opt-MyfsBackupSD.mount                        loaded    active   mounted   /opt/MyfsBackupSD
opt-MyfsDataSD.mount                          loaded    active   mounted   /opt/MyfsDataSD
opt-MyfsLogSD.mount                           loaded    active   mounted   /opt/MyfsLogSD
opt-MyfsScriptsSD.mount                       loaded    active   mounted   /opt/MyfsScriptsSD
opt-MyfsWalComprSD.mount                      loaded    active   mounted   /opt/MyfsWalComprSD

Thursday, June 25, 2020

Replacing a Boot Mirrored Disk in HP-UX 11.31 (11i v3)

Initialize boot information on the replacement disk.

Save the hardware paths to the disk.
MyHPUX01:(/root/home/root)(root)#ioscan -m lun /dev/disk/disk7
Class     I  Lun H/W Path  Driver  S/W State   H/W Type     Health  Description
======================================================================
disk      7  64000/0xfa00/0x1   esdisk  CLAIMED     DEVICE       online  HP      DG146BB976 
             0/4/1/0.0x5000c5000c7bc53d.0x0
                      /dev/disk/disk7      /dev/disk/disk7_p3   /dev/rdisk/disk7_p2
                      /dev/disk/disk7_p1   /dev/rdisk/disk7     /dev/rdisk/disk7_p3
                      /dev/disk/disk7_p2   /dev/rdisk/disk7_p1


In my case, the disk to be replaced has:
LUN hardware path: 64000/0xfa00/0x1
lunpath hardware path: 0/4/1/0.0x5000c5000c7bc53d.0x0

The disk is hot-swappable.

Halt LVM access to the disk.

#pvchange -a N /dev/disk/disk7_p2


Determine the new LUN instance number for the replacement disk.
# ioscan -m lun

- Create a partition description file:
# vi /tmp/partitionfile
3
EFI 500MB
HPUX 100%
HPSP 400MB

idisk -wf /tmp/partitionfile /dev/rdisk/disk-newdisk-


           -w   Enable write mode.  By default, idisk operates in read-only
                mode.  To create and write partition information to the disk
                you must specify the -w option.



- Create the new device files for the new partitions (e.g. disk28_p1, _p2, _p3):
# insf -e -C disk

# You should now see the new partitions:
# ioscan -m lun


Now assign the old instance number to the replacement disk.
# io_redirect_dsf -d /dev/disk/disk-old- -n /dev/disk/disk-new-

# ioscan -m lun /dev/disk/disk-new-

The LUN representation of the old disk with LUN hardware path 64000/0xfa00/0x0 was
removed. The LUN representation of the new disk with LUN hardware path
64000/0xfa00/0x1c was reassigned from LUN instance disk-new- to LUN instance 14, and its device
special files were renamed /dev/disk/disk14 and /dev/rdisk/disk14.


# Use efi_fsinit(1M) to initialize the FAT filesystem on the EFI and HPSP partitions:

efi_fsinit -d /dev/rdisk/disk7_p1
efi_fsinit -d /dev/rdisk/disk7_p3

mkboot -e -l /dev/rdisk/disk7

- Check the EFI partition:
efi_ls -d /dev/rdisk/disk7_p1

- Check the LIF area:
lifls -l /dev/rdisk/disk7_p2
- Check the content of AUTO file on EFI partition:

# efi_cp -d /dev/rdisk/disk7_p1 -u /EFI/HPUX/AUTO /tmp/x
# cat /tmp/x
boot vmunix
NOTE: Specify the -lq option if you prefer that your system boots without
interruption in case of a disk failure. On the original boot disk:
# mkboot -a "boot vmunix -lq" /dev/rdisk/disk7


Restore LVM configuration information to the new disk.

For example:

# vgcfgrestore -n /dev/vg00 /dev/rdisk/disk7_p2

10. Restore LVM access to the disk.
If you did not reboot the system in Step 2, reattach the disk as follows:

# vgchange -a y /dev/vg00
# vgdisplay -v vg00
# vgdisplay -v vg00

Synchronize the volume group data (only if the sync does not start automatically):

# cd /tmp
# nohup vgsync /dev/vg00 &
(output see /tmp/nohup.out)

11. Initialize/check boot information on the disk.
- Check if content of LABEL file (i.e. root, boot, swap and dump device definition) has been
initialized (done by lvextend) on the mirror disk:

# lvlnboot -v

Thursday, September 5, 2019

Setting up a DNS server in CentOS 7


The configuration of a DNS server on Linux CentOS 7 is very simple. First of all, we have to install the bind packages with the following command:

yum -y install bind bind-utils

Next, in the file /etc/named.conf, we define the zone we want to resolve. In this case, I want to resolve the name maindns.webserver.local:



zone "webserver.local" IN {
    type master;
    file "forward.webserverlocal.db";
    allow-update { none; };
};
zone "0.0.10.in-addr.arpa" IN {
    type master;
    file "reverse.webserverlocal.db";
    allow-update { none; };
};

In the file forward.webserverlocal.db we define the IPs and the names of the service. A single name can point to several servers, as happens with google.com:

C:\Users\MyPC>nslookup www.google.com
Server:   resolver.hp.net
Address:  16.110.135.51

Non-authoritative answer:
Name:    www.google.com
Addresses:  2607:f8b0:4000:815::2004
          74.125.195.105
          74.125.195.147
          74.125.195.99
          74.125.195.104
          74.125.195.106
          74.125.195.103

Let's take a look at our "forward" file:

[root@Centos7 ~]# cat /var/named/forward.webserverlocal.db
$TTL 86400
@ IN SOA maindns.webserver.local. root.webserver.local. (
        2011071001 ;Serial
        3600       ;Refresh
        1800       ;Retry
        604800     ;Expire
        86400      ;Minimum TTL
)
@ IN NS maindns.webserver.local.
@ IN NS secondarydns.webserver.local.
@ IN A 10.0.0.2
@ IN A 10.0.0.3
maindns IN A 10.0.0.2
secondarydns IN A 10.0.0.3

In the "reverse" file we define the reverse-lookup (PTR) records:

[root@Centos7 ~]# cat /var/named/reverse.webserverlocal.db
$TTL 86400
@ IN SOA maindns.webserver.local. root.webserver.local. (
        2011071001 ;Serial
        3600       ;Refresh
        1800       ;Retry
        604800     ;Expire
        86400      ;Minimum TTL
)
@ IN NS maindns.webserver.local.
maindns IN A 10.0.0.2
secondarydns IN A 10.0.0.3
2 IN PTR maindns.webserver.local.
3 IN PTR secondarydns.webserver.local.

Once all the parameters have been defined, we restart the named service with the systemctl restart named command.


In our file /etc/resolv.conf, we have to point to the IP where the DNS service runs. In this example, I have everything on the same server:


[root@Centos7 ~]# cat /etc/resolv.conf
nameserver 10.0.0.2

Finally, we test the name resolution via DNS:



[root@Centos7 ~]# dig maindns.webserver.local

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> maindns.webserver.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33754
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;maindns.webserver.local.        IN    A

;; ANSWER SECTION:
maindns.webserver.local.  86400  IN    A    10.0.0.2

;; AUTHORITY SECTION:
webserver.local.          86400  IN    NS   secondarydns.webserver.local.
webserver.local.          86400  IN    NS   maindns.webserver.local.

;; Query time: 0 msec
;; SERVER: 10.0.0.2#53(10.0.0.2)
;; WHEN: Mon Nov 12 13:27:25 CET 2018
;; MSG SIZE  rcvd: 120

Or, if like me you are more used to the old nslookup:



[root@Centos7 ~]# nslookup maindns.webserver.local
Server:    10.0.0.2
Address:   10.0.0.2#53

Name:    maindns.webserver.local
Address: 10.0.0.2

I will also check the test WEB server that I have started on both servers:

[root@Centos7 ~]# curl -s http://maindns.webserver.local
<html><body>Hola desde el Webserver 1</body></html>
[root@Centos7 ~]# curl -s http://secondarydns.webserver.local
<html><body>Hola desde el Webserver 2</body></html>


DNS configuration by round robin

Now we want the service to keep running on the other web server if the application on one server goes down. This configuration is often called "high availability" by DNS round robin.

What I am going to do is configure the DNS so that the same name points to several different IPs. Each IP is served by a different server (operating system), so if "Webserver 1" goes down, the service will continue to be provided by "Webserver 2".

The service name I am going to point to is called webservertest; I have one web server started on the host with IP 10.0.0.2 and the other on 10.0.0.3. The result is as follows:


[root@Centos7 named]# curl -s http://webservertest
<html><body>Hola desde el Webserver 1</body></html>
[root@Centos7 named]# systemctl stop httpd
[root@Centos7 named]# curl -s http://webservertest
<html><body>Hola desde el Webserver 2</body></html>


As we can see, although Apache on 10.0.0.2 has been stopped, the URL continues to be served by the Apache on 10.0.0.3.
To achieve this, I have configured new entries in the DNS. Let's see them:

  • File /etc/named.conf:

# webservertest
zone "webservertest" IN {
    type master;
    file "forward.webservertest.db";
    allow-update { none; };
};
zone "reverse.webservertest" IN {
    type master;
    file "reverse.webservertest.db";
    allow-update { none; };
};

  • File/var/named/forward.webservertest.db:

$TTL 86400
@ IN SOA webservertest. root.webserver.local. (
        2011071001 ;Serial
        3600       ;Refresh
        1800       ;Retry
        604800     ;Expire
        86400      ;Minimum TTL
)
@ IN NS webservertest.
@ IN A 10.0.0.2
@ IN A 10.0.0.3
webservertest IN A 10.0.0.2
webservertest IN A 10.0.0.3

As we can see, the same service name points to two different IPs.

  • File /var/named/reverse.webservertest.db:

$TTL 86400
@ IN SOA webservertest. root.webservertest. (
        2011071001 ;Serial
        3600       ;Refresh
        1800       ;Retry
        604800     ;Expire
        86400      ;Minimum TTL
)
@ IN NS webservertest.
webservertest IN A 10.0.0.2
webservertest IN A 10.0.0.3
2 IN PTR webservertest.
3 IN PTR webservertest.
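The effect of duplicating the A record can be sketched in a few lines of Python: a round-robin resolver hands out the record set in rotation, which is what spreads clients across the two servers (this is a simulation of the behaviour, not an actual DNS query):

```python
from itertools import cycle

# The two A records that "webservertest" resolves to in the zone above
addresses = ["10.0.0.2", "10.0.0.3"]

# DNS round robin behaves like cycling through the record set
rotation = cycle(addresses)

def pick_server():
    """Return the next address, as a round-robin resolver would."""
    return next(rotation)

# Four consecutive "lookups" alternate between the two web servers
picks = [pick_server() for _ in range(4)]
print(picks)
```

Note that this gives load distribution and crude failover, but no health checking: DNS keeps handing out the address of a dead server until the record is removed or its TTL expires.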

Thursday, April 11, 2019

Kernel parameters in HP-UX


kctune is the administrative command for viewing or changing HP-UX kernel parameters. The following shows how to view or modify them.

 Viewing Kernel Parameters:
/usr/sbin/kctune

Modifying Kernel Parameters:
/usr/sbin/kctune <parameter>=<value>
Sample Output: 
mydb:/ #/usr/sbin/kctune hires_timeout_enable=1
     ==> Update the automatic 'backup' configuration first? yes
       * The automatic 'backup' configuration has been updated.
       * Future operations will update the backup without prompting.
        * The requested changes have been applied to the currently
         running configuration.
Tunable                         Value  Expression  Changes
hires_timeout_enable  (before)     0   Default     Immed
                       (now)       1   1
mydb:/ #

Viewing Specific Kernel Parameter:
/usr/sbin/kctune <parameter name>
Use the command below if you have HP-UX B.11.31:
mydb:/ #/usr/sbin/kctune hires_timeout_enable
Tunable               Value  Expression  Changes
hires_timeout_enable      1  1           Immed
mydb:/ #
Use the command below if you have HP-UX B.11.23:
sun2:/home/oracle #sysdef | grep maxuprc
maxuprc                    3686          -          3-                   -
sun2:/home/oracle #

Tuesday, November 20, 2018

Highly Available Clusters with kubeadm

In CentOS 7, we'll install Kubernetes with the following command:

yum install -y kubernetes etcd

For this to work, we must have the CentOS Extras repository enabled.

Once the packages are installed we can start booting services.

Booting services in the Master

systemctl start etcd
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kube-apiserver

Start-up of the services in each of the nodes

systemctl start docker
systemctl start kube-proxy
systemctl start kubelet

You will notice that a new network interface called docker0 has been created.



POD Configuration

We will create a file in JSON format like the one below. If you remember, in the article Installation and configuration of Dockers (Containers) in Centos 7 we already downloaded the Apache container, so we will use it to configure the POD.

This time, I will start it in the local port 9090, since for the 8080 I have another service listening:

[root@Centos7 kubernetes]# docker run -dit --name apachetest -p 9090:80 -v /tmp/ws/:/usr/local/apache2/htdocs/ httpd
7983a74eee23fa59abd434ad5107896e2b2a1a5b9539c5770e6d1c8549eeb060
[root@Centos7 kubernetes]#
[root@Centos7 kubernetes]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7983a74eee23 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:9090->80/tcp apachetest
[root@Centos7 kubernetes]#
[root@Centos7 kubernetes]# curl -s http://localhost:9090
<html>
<body>
I'm a container
</body>
</html>
[root@Centos7 kubernetes]#
The JSON file would be as follows:
[root@Centos7 kubernetes]# cat container-httpd-rc.json
{ "kind": "ReplicationController",
"apiVersion": "v1″,
"metadata":{ "name": "apachepod-controller" },
"spec":{
"replicas":3,
"selector":{ "name": "apachepod" },
"template:{
"metadata":{
"labels":{ "name": "apachepod" }
},
"spec":{
"containers:[ {
"name": "apachepod",
"image": "docker.io/httpd",
"ports:[ {
"containerPort":80,
"protocol": "TCP"
} ]
} ]
}
}
}
}
[root@Centos7 kubernetes]#
Next, we apply the settings:
- If we haven't previously created a key, we have to create it for the first time:
[root@Centos7 kubernetes]# openssl genrsa -out /tmp/serviceaccount.key 2048
Generating RSA private key, 2048 bit long modulus
……………………….+++
……………………………………………………+++
e is 65537 (0x10001)

We edit the file /etc/kubernetes/apiserver, adding:

KUBE_API_ARGS="--service-account-key-file=/tmp/serviceaccount.key"
We edit the file /etc/kubernetes/controller-manager, adding:
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/tmp/serviceaccount.key"

We restart the Kubernetes services:

[root@Centos7 kubernetes]# systemctl restart etcd
[root@Centos7 kubernetes]# systemctl restart kube-controller-manager
[root@Centos7 kubernetes]# systemctl restart kube-scheduler
[root@Centos7 kubernetes]# systemctl restart kube-apiserver
[root@Centos7 kubernetes]#

- Once the key is generated, we can finally create our POD from the previously created JSON file:

[root@Centos7 kubernetes]# kubectl create -f container-httpd-rc.json
replicationcontroller "apachepod-controller" created
[root@Centos7 kubernetes]#
[root@Centos7 kubernetes]# kubectl get pod
NAME READY STATUS RESTARTS AGE
apachepod-controller-10xv7 0/1 Pending 0 50s
apachepod-controller-dx8nr 0/1 Pending 0 50s
apachepod-controller-nm5k4 0/1 Pending 0 50s
[root@Centos7 kubernetes]#
[root@Centos7 kubernetes]# kubectl get replicationcontrollers
NAME DESIRED CURRENT READY AGE
apachepod-controller 3 3 0 1m
[root@Centos7 kubernetes]#
If you wish, you can scale the number of PODs in real time:
[root@Centos7 kubernetes]# kubectl scale rc apachepod-controller --replicas=4
replicationcontroller "apachepod-controller" scaled
[root@Centos7 kubernetes]# kubectl get replicationcontrollers
NAME DESIRED CURRENT READY AGE
apachepod-controller 4 4 0 2m
[root@Centos7 kubernetes]# kubectl get pod
NAME READY STATUS RESTARTS AGE
apachepod-controller-10xv7 0/1 Pending 0 3m
apachepod-controller-dx8nr 0/1 Pending 0 3m
apachepod-controller-ksrdc 0/1 Pending 0 14s
apachepod-controller-nm5k4 0/1 Pending 0 3m
[root@Centos7 kubernetes]#

Creation of the nodes that will form part of the kubernetes cluster:

[root@Centos7 kubernetes]# cat nodes.json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.0.0.2",
    "labels": {
      "environment": "production",
      "name": "kubernete1"
    }
  }
}
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.0.0.3",
    "labels": {
      "environment": "production",
      "name": "kubernete2"
    }
  }
}
[root@Centos7 kubernetes]#
[root@Centos7 kubernetes]# kubectl create -f nodes.json
node "10.0.0.2" created
node "10.0.0.3" created
[root@Centos7 kubernetes]#