Forwarded from UNDERCODE NEWS
The profiteering industry behind recycled lithium batteries: the recycling market alone hits 117.8 billion
#international
Forwarded from UNDERCODE NEWS
Russian hackers have been in and out of the US Treasury and Commerce Departments for over a year.
#CyberAttacks
β β β Uππ»βΊπ«Δπ¬πβ β β β
How to test a Kubernetes cluster for vulnerabilities?
1) Kubei is a vulnerability scanning tool that gives users an accurate, immediate risk assessment of their Kubernetes clusters.
2) Kubei scans all images in use in the Kubernetes cluster, including application and system pod images.
3) It does not scan image registries and does not require prior integration with CI/CD pipelines.
4) It is a customizable tool that lets users set the scope of the scan (target namespaces), the scan speed, and the vulnerability severity level of interest.
5) The tool also provides a graphical interface that shows an administrator where vulnerabilities were found and what should be replaced to mitigate them.
6) Requirements
A Kubernetes cluster is already up and running, and kubeconfig (~/.kube/config) is correctly configured for the target cluster.
Usage:
7) Run the following command to deploy Kubei to the cluster:
kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml
8) Run the following command to make sure Kubei is up and running:
kubectl -n kubei get pod -lapp=kubei
9) Then forward the port to the Kubei web app with the following command:
kubectl -n kubei port-forward $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}') 8080
(See also: how to use port forwarding with containers deployed in a Kubernetes cluster.)
In your browser, go to http://localhost:8080/view/ and then click GO to start the scan.
To check the status of Kubei and the progress of the current scan, run the following command:
kubectl -n kubei logs $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}')
Refresh the page (http://localhost:8080/view/) to update the results.
If some pods are stuck in a Waiting state, check the Kubei logs with the command above to identify the cause.
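Putting steps 7-9 together, a minimal shell sketch of the whole workflow (the readiness wait and its 120-second timeout are convenience additions, not part of the original steps):
kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml
kubectl -n kubei wait --for=condition=Ready pod -l app=kubei --timeout=120s
kubectl -n kubei port-forward $(kubectl -n kubei get pods -l app=kubei -o jsonpath='{.items[0].metadata.name}') 8080 &
# then open http://localhost:8080/view/ and click GO to start the scan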
β β β Uππ»βΊπ«Δπ¬πβ β β β
β β β Uππ»βΊπ«Δπ¬πβ β β β
Exploring MinIO - High-Performance Standalone Object Storage, S3 Compatible:
Let's take a look at some of the features worth paying attention to.
High performance: MinIO is capable of reading/writing at ~170 GB/s. That's a lot!
Scalability: use clustering and scale out as needed
Cloud-native
Data protection using erasure coding
Multiple encryption schemes supported, including AES-CBC, AES-256-GCM, and ChaCha20
Compatible with common KMS solutions
Event notifications
Compatible with etcd and CoreDNS
MinIO is a good choice for software-defined distributed storage.
Let's see how to set it up.
1) Installing the MinIO server
You can install it on Linux, Windows, macOS, and via Kubernetes.
Prefer to build from source?
Of course you can, if you have Golang installed.
2) Log in to the server
Create a folder on the desired filesystem, for example minio-server.
3) Go to the newly created folder and run the wget command below:
wget https://dl.min.io/server/minio/release/linux-amd64/minio
This downloads the binary; the resulting file should look like this:
-rw-r--r-- 1 root root 48271360 Oct 18 21:57 minio
4) Make the file executable with the chmod command:
chmod 755 minio
5) Let's run MinIO as a server.
./minio server /data &
/data, mentioned above, is the filesystem path where MinIO will store objects.
6) Startup is fast and you should see information like this:
Endpoint: http://xx.71.141.xx:9000 http://127.0.0.1:9000
AccessKey: minioadmin
SecretKey: minioadmin
7) Browser Access:
http://xx.71.141.xx:9000 http://127.0.0.1:9000
8) Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
$ mc alias set myminio http://xx.71.141.xx:9000 minioadmin minioadmin
9) Object API (Amazon S3 compatible):
Go: https://docs.min.io/docs/golang-client-quickstart-guide
Java: https://docs.min.io/docs/java-client-quickstart-guide
Python: https://docs.min.io/docs/python-client-quickstart-guide
JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
.NET: https://docs.min.io/docs/dotnet-client-quickstart-guide
10) You will also see a warning: Detected default credentials 'minioadmin:minioadmin', please change the credentials immediately using 'MINIO_ACCESS_KEY' and 'MINIO_SECRET_KEY'.
11) Let's log into MinIO through a browser with the default credentials (minioadmin:minioadmin).
The interface is very clean and simple, but first of all let's change the default credentials, since leaving them in place creates a risk of tampering.
12) To change the default MinIO credentials, export the access key and secret key as shown below and restart MinIO:
export MINIO_ACCESS_KEY=itsecforu
export MINIO_SECRET_KEY=itsecpassword
./minio server /data &
13) Now it should no longer complain or warn about default credentials being detected.
Let's try to upload files.
14) Click the + icon in the bottom-right corner and create a bucket.
15) I have uploaded a test file, and it is immediately visible in the browser
and on the server:
ls -ltr
total 4
-rw-r--r-- 1 root root 11 Oct 19 11:09 MinIO-Test.txt
16) If you click the share button for a file in your browser, you will receive a share link and an option to set an expiration date.
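Before moving on to the client: to keep the server running after you close the shell, here is a minimal sketch that combines the earlier steps (the credential values are placeholders, pick your own):
export MINIO_ACCESS_KEY=changeme-access-key
export MINIO_SECRET_KEY=changeme-secret-key
nohup ./minio server /data > minio.log 2>&1 &
# the server log goes to minio.log; check it if startup fails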
MinIO client
The MinIO client (mc) is more than just an aws-cli substitute; it lets you manage your storage from the command line.
17) The client is available for Windows, macOS and Linux.
To install on Linux, run the following:
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod 755 mc
Run the mc command to see the command help:
./mc
NAME:
mc - MinIO Client for cloud storage and filesystems.
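As a quick usage sketch, the web UI steps above (create a bucket, upload a file, share it) can also be done with mc. The alias, bucket, and file names below are just examples, flag syntax may vary slightly between mc releases, and you should use the credentials you exported earlier:
./mc alias set myminio http://127.0.0.1:9000 <ACCESS_KEY> <SECRET_KEY>
./mc mb myminio/test-bucket
./mc cp MinIO-Test.txt myminio/test-bucket/
./mc ls myminio/test-bucket
./mc share download --expire 24h myminio/test-bucket/MinIO-Test.txt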
β β β Uππ»βΊπ«Δπ¬πβ β β β
Forwarded from UNDERCODE NEWS
Intel-owned artificial intelligence processor company infected with Pay2Key ransomware.
#Technologies
Forwarded from UNDERCODE NEWS
NASA's new lunar landing proposal outlines seven science priorities and plans to establish a lunar base camp.
#international
β β β Uππ»βΊπ«Δπ¬πβ β β β
How to fix the [warn] could not build optimal proxy_headers_hash error
1) How to solve the problem: "nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size".
2) If you have an Nginx proxy that proxies multiple sites, you may encounter the error shown above in one way or another.
3) To fix the error, you will need to edit the configuration files for the proxied sites.
4) EXAMPLE :
Mine were located in the /etc/nginx/sites.d/ directory.
5) In each of these files, edit the "location" section. The existing block looks something like this:
location ~ /.git {
deny all;
}
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
6) Add the two hash directives above the existing proxy settings (you can use other values for the numbers if you wish):
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 128;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
7) After you've finished editing this small part, check your Nginx configuration by running the nginx -t command.
sudo nginx -t
8) You should get the following output:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
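After the test passes, reload Nginx so the new hash settings take effect (assuming a systemd-based installation; nginx -s reload also works):
sudo systemctl reload nginx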
Well done!
β β β Uππ»βΊπ«Δπ¬πβ β β β
Forwarded from UNDERCODE NEWS
It is rumored that Russian hackers have infiltrated U.S. Treasury and Commerce Department networks to monitor email.
#CyberAttacks
β β β Uππ»βΊπ«Δπ¬πβ β β β
Automatically schedule and execute queries:
A service that schedules and executes queries against a relational database and writes the output to files on the local filesystem or S3.
The minimum requirement is a config.yml file. The main sections of the config include:
sqlagent - The connection info to the SQL Agent service.
connections - A map of database connection info by name.
queries - An array of queries defined inline or referencing a file.
schedule - The schedule to run this set of queries.
Additional options are provided to define where files are written and their format.
Deployment:
1) Download the source from https://github.com/chop-dbhi/sql-extractor
2) The dep tool is used for managing dependencies. Install it by running:
go get github.com/golang/dep/...
3) Then run the following to install the dependencies:
dep ensure
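Putting the deployment steps together, a minimal sketch (the checkout directory and the go build step follow standard Go conventions and are assumptions here, not taken from the project docs):
git clone https://github.com/chop-dbhi/sql-extractor
cd sql-extractor
go get github.com/golang/dep/...
dep ensure
go build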
β β β Uππ»βΊπ«Δπ¬πβ β β β