Compare commits

...

2 Commits

Author SHA1 Message Date
Gharke 0a9f268917 no idea what I changed 6 months ago
Gharke 565192a1e9 more linux docs 6 months ago
  1. 4
      .obsidian/app.json
  2. 82
      .obsidian/workspace.json
  3. BIN
      DALL·E 2024-05-09 14.05.14 - A steampunk style thumbnail for a blog post featuring an AI brain made of gears and circuits. The image combines mechanical elements with electronic c.png
  4. 73
      Finished blog post from ChatGPT.md
  5. 145
      Generative AI/How to Run LLama3 Locally with Ollama and OpenwebUI.md
  6. 3
      Generative AI/Setup Local LLM.md
  7. 36
      Linux/Install Apache web server.md
  8. 66
      Linux/Install MySQL on Ubuntu.md
  9. 62
      Linux/Install PHP on Apache.md
  10. 0
      Linux/Install Ubuntu Linux.md
  11. 30
      Linux/Install Wordpress Behind an NPM Reverse Proxy.md
  12. 6
      Linux/Install phpMyAdmin.md
  13. 4
      Linux/Other Linux based servers.md
  14. 62
      Linux/Virtual Web Sites in Apache.md
  15. 27
      Linux/linux-commands.md
  16. BIN
      Networking/FortiNet/Pasted image 20240503073354.png
  17. BIN
      Networking/FortiNet/Pasted image 20240503073435.png
  18. BIN
      Networking/FortiNet/Pasted image 20240503073452.png
  19. BIN
      Networking/FortiNet/Pasted image 20240503075049.png
  20. BIN
      Networking/FortiNet/Pasted image 20240503075516.png
  21. BIN
      Networking/FortiNet/Pasted image 20240503075922.png
  22. BIN
      Networking/FortiNet/Pasted image 20240503080006.png
  23. BIN
      Networking/FortiNet/Pasted image 20240503080426.png
  24. BIN
      Networking/FortiNet/Pasted image 20240503080438.png
  25. BIN
      Networking/FortiNet/Pasted image 20240503080751.png
  26. BIN
      Networking/FortiNet/Pasted image 20240503081047.png
  27. BIN
      Networking/FortiNet/Pasted image 20240503081230.png
  28. BIN
      Networking/FortiNet/Pasted image 20240503081241.png
  29. BIN
      Networking/FortiNet/Pasted image 20240503081344.png
  30. 16
      Networking/FortiNet/Self Hosting.md
  31. 2
      Networking/NGINX Proxy Manager.md
  32. 6
      Wordpress.md

4
.obsidian/app.json

@@ -1 +1,3 @@
{}
{
"alwaysUpdateLinks": true
}

82
.obsidian/workspace.json

@@ -11,11 +11,40 @@
"id": "5d2a67ec854cad2d",
"type": "leaf",
"state": {
"type": "empty",
"state": {}
"type": "markdown",
"state": {
"file": "Linux/Install Apache web server.md",
"mode": "source",
"source": false
}
}
]
},
{
"id": "63195de13aaefa15",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "Linux/Install Wordpress Behind an NPM Reverse Proxy.md",
"mode": "source",
"source": false
}
}
},
{
"id": "d54e3308dbfb89cb",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "Linux/Other Linux based servers.md",
"mode": "source",
"source": false
}
}
}
],
"currentTab": 2
}
],
"direction": "vertical"
@@ -81,6 +110,7 @@
"state": {
"type": "backlink",
"state": {
"file": "Linux/Other Linux based servers.md",
"collapseAll": false,
"extraContext": false,
"sortOrder": "alphabetical",
@@ -97,6 +127,7 @@
"state": {
"type": "outgoing-link",
"state": {
"file": "Linux/Other Linux based servers.md",
"linksCollapsed": false,
"unlinkedCollapsed": true
}
@@ -118,7 +149,9 @@
"type": "leaf",
"state": {
"type": "outline",
"state": {}
"state": {
"file": "Linux/Other Linux based servers.md"
}
}
}
]
@@ -138,23 +171,46 @@
"command-palette:Open command palette": false
}
},
"active": "5d2a67ec854cad2d",
"active": "d54e3308dbfb89cb",
"lastOpenFiles": [
"Generative AI/How to Run LLama3 Locally with Ollama and OpenwebUI.md",
"Linux/Install Apache web server.md",
"Linux/Install Wordpress Behind an NPM Reverse Proxy.md",
"Linux/Install PHP on Apache.md",
"Linux/Install phpMyAdmin.md",
"Linux/Install Ubuntu Linux.md",
"Linux/Virtual Web Sites in Apache.md",
"Networking/FortiNet/Pasted image 20240503073354.png",
"Linux/Other Linux based servers.md",
"Docker/docker.md",
"Generative AI/Speech to Text with Whisper AI.md",
"Generative AI/Setup Local LLM.md",
"Finished blog post from ChatGPT.md",
"DALL·E 2024-05-09 14.05.14 - A steampunk style thumbnail for a blog post featuring an AI brain made of gears and circuits. The image combines mechanical elements with electronic c.png",
"Linux/linux-commands.md",
"Linux/ubuntu-server.md",
"Linux/Stand-up a Linux Server.md",
"Linux/Other Linux based servers.md",
"Linux/Install Docker on Ubuntu.md",
"Linux/Install MySQL on Ubuntu.md",
"Linux/ubuntu-server.md",
"Networking/Self Hosting Web Sites.md",
"Networking/NGINX Proxy Manager.md",
"Linux/ssh.md",
"Generative AI/Speech to Text with Whisper AI.md",
"Generative AI/Setup Local LLM.md",
"Generative AI/Stable Diffusion WebUI AUTOMATIC1111 A Beginner's Guide - Stable Diffusion Art.md",
"Generative AI/How to Run LLama3 Locally with Ollama and OpenwebUI.md",
"Generative AI/How to get prompts from images for Stable Diffusion - Stable Diffusion Art.md",
"README.md",
"Wordpress.md",
"Generative AI/FAQ - Tips & Tricks - Automatic1111 (SD 1.5) Civitai.md",
"Generative AI/A curated list of Stable Diffusion Tips, Tricks, and Guides Civitai.md",
"Linux/ssh.md",
"Networking/FortiNet/Pasted image 20240503081344.png",
"Networking/FortiNet/Pasted image 20240503073452.png",
"Networking/FortiNet/Pasted image 20240503073435.png",
"Networking/FortiNet/Pasted image 20240503081230.png",
"Networking/FortiNet/Pasted image 20240503081241.png",
"Networking/FortiNet/Pasted image 20240503081047.png",
"Networking/FortiNet/Pasted image 20240503080751.png",
"Networking/FortiNet/Pasted image 20240503080438.png",
"Networking/FortiNet",
"Generative AI/Setup Local LLM",
"Generative AI",
"Networking/Self Hosting Web Sites.md",
"Networking"
]
}

BIN
DALL·E 2024-05-09 14.05.14 - A steampunk style thumbnail for a blog post featuring an AI brain made of gears and circuits. The image combines mechanical elements with electronic c.png

Binary file not shown (new image, 1.7 MiB).

73
Finished blog post from ChatGPT.md

@@ -0,0 +1,73 @@
![[DALL·E 2024-05-09 14.05.14 - A steampunk style thumbnail for a blog post featuring an AI brain made of gears and circuits. The image combines mechanical elements with electronic c.png]]
## **Demystifying ChatGPT: A Guide for Managers**
In the bustling world of business technology, artificial intelligence (AI) stands out as a frontier of both opportunity and mystique. While AI promises revolutionary changes, it often evokes a sense of fear or awe—as if it were magic. However, AI is far from arcane; it's a practical tool built on understandable principles. This post aims to demystify one of the most talked-about [AI technologies today: ChatGPT](https://www.openai.com/chatgpt).
### **Understanding AI and Machine Learning**
**Artificial intelligence** is a broad field focused on creating machines that can perform tasks typically requiring human intelligence. These include interpreting spoken language, recognizing patterns, or making decisions. **Machine learning** is a subset of AI that enables computers to learn from data and improve over time without being explicitly programmed for every task.
Data is the cornerstone of AI. By feeding a machine learning model large amounts of data, it learns to recognize patterns and make predictions. The quality and quantity of this data significantly influence the accuracy and reliability of AI applications.
**Visual Aid**: An infographic here explains the AI learning process through machine learning with simple icons and short text.
### **Introduction to Natural Language Processing (NLP)**
At the heart of ChatGPT lies **Natural Language Processing** or [NLP](https://en.wikipedia.org/wiki/Natural_language_processing). NLP is a domain of AI that equips computers to understand, interpret, and generate human language. From voice-activated GPS systems to language translation services, NLP powers numerous tools we use daily.
### **The Backbone of ChatGPT: Transformer Models**
ChatGPT is built on a type of model known as a **transformer model**. These models are designed to handle sequences of data, such as sentences, and are exceptionally good at dealing with all parts of a sentence, regardless of length. Transformers use what's called an 'attention mechanism' that helps the model determine which words in a sentence are most important, thus better understanding the context.
**Diagram**: A simple diagram shows how the transformer model processes input to generate output, enhancing understanding of this complex mechanism.
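The attention idea above can be sketched in a few lines of plain Python. This is a toy illustration only: the vectors below are made-up values standing in for word embeddings, whereas a real transformer uses learned query/key/value projections and many attention heads.

```python
import math

def softmax(xs):
    # exponentiate and normalize so the scores form a probability distribution
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over small lists of equal-length vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # how relevant is each position to this query?
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # blend the value vectors according to those weights
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy "word" vectors
ctx = attention(vecs, vecs, vecs)            # self-attention: each word attends to all words
```

Each output row is a weighted mix of every input vector, which is the mechanical sense in which the model decides "which words in a sentence are most important" for each position.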
### **The Training of ChatGPT**
ChatGPT was trained using a method called **supervised learning**, where the model learns by example. It was fed a vast array of text from books, websites, and other media to learn how humans use language. This training helps ChatGPT predict what text should come next in a conversation, making its responses seem surprisingly human-like.
**Case Study**: Highlight a real-world example where a company used ChatGPT to automate customer service, showing the before and after scenario regarding response times and customer satisfaction.
### **How ChatGPT Understands and Generates Language**
ChatGPT breaks down the text into smaller units, or **tokens**, which represent commonly occurring sequences of characters or words. This process, called **tokenization**, helps the model efficiently process large amounts of text. When generating responses, ChatGPT uses the context provided by these tokens to produce coherent and contextually appropriate language.
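To make tokenization concrete, here is a toy sketch in Python. This is not ChatGPT's actual tokenizer (OpenAI models use byte-pair encoding over a vocabulary of tens of thousands of tokens); the tiny vocabulary below is invented purely for illustration.

```python
def tokenize(text, vocab):
    """Greedily match the longest known token at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # longest candidate first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

vocab = {"token", "ization", "un", "believ", "able", " "}
print(tokenize("unbelievable tokenization", vocab))
# → ['un', 'believ', 'able', ' ', 'token', 'ization']
```

Note how rare words split into familiar sub-word pieces; the model then works with these pieces rather than whole words.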
### **Practical Applications of ChatGPT**
Imagine a customer service chatbot powered by ChatGPT that can handle inquiries 24/7, providing timely and accurate responses, or an analysis tool that sifts through thousands of product reviews, summarizing customer sentiments. These applications are not just possible; they are already in use, demonstrating how AI can significantly enhance business efficiency and customer satisfaction.
### **Addressing Ethical Considerations**
Despite its vast potential, AI like ChatGPT isn't without challenges. Issues of **bias and fairness** often arise, as the model's outputs depend heavily on its training data. It's crucial for businesses to implement AI responsibly, ensuring that the systems are transparent and the data used for training is as unbiased as possible.
### **Overcoming Fear of AI**
Understanding that AI operates based on data and predefined rules can help demystify its functionality and reduce fears around its use. By becoming knowledgeable about AI, managers can better harness its capabilities to improve decision-making and operational efficiencies in their businesses.
**Interactive Element**: A short quiz at the end of this section tests readers' understanding of AI basics, with instant feedback provided.
### **Conclusion**
AI technologies, particularly ChatGPT, are powerful tools that, when understood and applied judiciously, can transform aspects of business management and operations. As we've seen, the magic of AI is not in mystical powers but in its practical application, built on solid, understandable principles.
### **Additional Resources**
For those interested in diving deeper into the world of AI and ChatGPT, a wealth of resources is available online. Engaging with these materials can provide a more thorough understanding and greater comfort with these transformative technologies.
**Feedback and Further Engagement**: We encourage you to leave comments below or reach out with specific questions about AI implementation in your operations. This feedback will help guide our future content and ensure we are meeting your needs.
### **FAQ Section**
**Q1: What are the first steps to implementing AI like ChatGPT in my business?**
- Start by identifying areas where AI can automate routine tasks or enhance decision-making. Consult with AI experts to understand the best AI solutions for these needs.
**Q2: How can I ensure the AI system is used ethically in my business?**
- Focus on transparency, use unbiased training data, and regularly review AI decisions for fairness and accuracy.
**Q3: What should I do if the AI model does not perform as expected?**
- AI models often require adjustments and retraining. Ensure you have the right support and continuously monitor performance to make necessary improvements.
### **Call to Action**
As managers and leaders, embracing AI technologies like ChatGPT can lead to significant benefits. We invite you to explore these possibilities and start a conversation within your organization about how AI can be integrated into your business processes. For hands-on guidance and more detailed discussions, consider attending one of our upcoming webinars on AI implementation.
**Webinar Link**: [Join Our AI Implementation Webinar](#)

145
Generative AI/How to Run LLama3 Locally with Ollama and OpenwebUI.md

@@ -1,145 +0,0 @@
I’m a big fan of Llama. Meta releasing their LLM open source is a net benefit for the tech community at large, and their permissive license allows most medium and small businesses to use their LLMs with little to no restrictions (within the bounds of the law, of course). Their latest release is Llama 3, which has been highly anticipated.
Llama 3 comes in two sizes: 8 billion and 70 billion parameters. This kind of model is trained on a massive amount of text data and can be used for a variety of tasks, including generating text, translating languages, writing different kinds of creative content, and answering your questions in an informative way. Meta touts Llama 3 as one of the best open models available, but it is still under development. Here’s the 8B model benchmarks when compared to Mistral and Gemma (according to Meta).
[![Benchmarks](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax9r9z2w2zghv81grbh7.png)](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax9r9z2w2zghv81grbh7.png)
This begs the question: how can I, the regular individual, run these models locally on my computer?
## [](https://dev.to/timesurgelabs/how-to-run-llama-3-locally-with-ollama-and-open-webui-297d#getting-started-with-ollama)Getting Started with Ollama
That’s where [Ollama](https://ollama.com/) comes in! Ollama is a free and open-source application that allows you to run various large language models, including Llama 3, on your own computer, even with limited resources. Ollama takes advantage of the performance gains of llama.cpp, an open source library designed to allow you to run LLMs locally with relatively low hardware requirements. It also includes a sort of package manager, allowing you to download and use LLMs quickly and effectively with just a single command.
The first step is [installing Ollama](https://ollama.com/download). It supports all 3 of the major OSes, with [Windows being a “preview”](https://ollama.com/blog/windows-preview) (nicer word for beta).
Once this is installed, open up your terminal. On all platforms, the command is the same.
```
ollama run llama3
```
Wait a few minutes while it downloads and loads the model, and then start chatting! It should bring you to a chat prompt similar to this one.
```
ollama run llama3
>>> Who was the second president of the united states?
The second President of the United States was John Adams. He served from 1797 to 1801, succeeding
George Washington and being succeeded by Thomas Jefferson.
>>> Who was the 30th?
The 30th President of the United States was Calvin Coolidge! He served from August 2, 1923, to March 4,
1929.
>>> /bye
```
You can chat all day within this terminal chat, but what if you want something more ChatGPT-like?
## [](https://dev.to/timesurgelabs/how-to-run-llama-3-locally-with-ollama-and-open-webui-297d#open-webui)Open WebUI
Open WebUI is an extensible, self-hosted UI that runs entirely inside of [Docker](https://docs.docker.com/desktop/). It can be used either with Ollama or other OpenAI compatible LLMs, like LiteLLM or my own [OpenAI API for Cloudflare Workers](https://github.com/chand1012/openai-cf-workers-ai).
Assuming you already have [Docker](https://docs.docker.com/desktop/) and Ollama running on your computer, [installation](https://docs.openwebui.com/getting-started/#quick-start-with-docker-) is super simple.
```
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
Then simply go to [http://localhost:3000](http://localhost:3000/), make an account, and start chatting away!
[![OpenWebUI Example](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdi1d35zh09s78o8vqvb.png)](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdi1d35zh09s78o8vqvb.png)
If you didn’t run Llama 3 earlier, you’ll have to pull some models down before you can start chatting. The easiest way to do this is to click the settings icon after clicking your name in the bottom left.
[![Settings](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqyetksyn0y4a0p12ylu.png)](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqyetksyn0y4a0p12ylu.png)
Then click on “models” on the left side of the modal and paste in the name of a model from the [Ollama registry](https://ollama.com/models). Here are some models that I’ve used and recommend for general purposes.
- `llama3`
- `mistral`
- `llama2`
[![Models Setting Page](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxc581jf4w3xszymjfbg.png)](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxc581jf4w3xszymjfbg.png)
## [](https://dev.to/timesurgelabs/how-to-run-llama-3-locally-with-ollama-and-open-webui-297d#ollama-api)Ollama API
If you want to integrate Ollama into your own projects, Ollama offers both its own API as well as an OpenAI Compatible API. The APIs automatically load a locally held LLM into memory, run the inference, then unload after a certain timeout. You do have to pull whatever models you want to use before you can run the model via the API, which can easily be done via the command line.
```
ollama pull mistral
```
### [](https://dev.to/timesurgelabs/how-to-run-llama-3-locally-with-ollama-and-open-webui-297d#ollama-api)Ollama API
Ollama has their own API available, which also has a [couple of SDKs](https://github.com/ollama/ollama?tab=readme-ov-file#libraries) for Javascript and Python.
Here is how you can do a simple text generation inference with the API.
```
curl http://localhost:11434/api/generate -d '{
"model": "mistral",
"prompt":"Why is the sky blue?"
}'
```
And here’s how you can do a Chat generation inference with the API.
```
curl http://localhost:11434/api/chat -d '{
"model": "mistral",
"messages": [
{ "role": "user", "content": "why is the sky blue?" }
]
}'
```
Replace the `model` parameter with whatever model you want to use. See the [official API docs](https://github.com/ollama/ollama/blob/main/docs/api.md) for more information.
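The same call can also be made from Python with nothing but the standard library. This is a minimal sketch mirroring the curl example above (the `build_payload` and `generate` names are my own, not part of any SDK); it assumes Ollama is listening on its default port 11434:

```python
import json
import urllib.request

def build_payload(model, prompt):
    # "stream": False asks Ollama for a single JSON object instead of a stream
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt, model="mistral", host="http://localhost:11434"):
    """Send a non-streaming /api/generate request and return the response text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("Why is the sky blue?")` then behaves like the first curl example, returning the model's completed text.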
### [](https://dev.to/timesurgelabs/how-to-run-llama-3-locally-with-ollama-and-open-webui-297d#openai-compatible-api)OpenAI Compatible API
You can also use Ollama as a drop in replacement (depending on use case) with the OpenAI libraries. Here’s an example from [their documentation](https://github.com/ollama/ollama/blob/main/docs/openai.md).
```
# Python
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',
    # required but ignored
    api_key='ollama',
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            'role': 'user',
            'content': 'Say this is a test',
        }
    ],
    model='mistral',
)
```
This also works for Javascript.
```
// Javascript
import OpenAI from 'openai'

const openai = new OpenAI({
  baseURL: 'http://localhost:11434/v1/',
  // required but ignored
  apiKey: 'ollama',
})

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'llama2',
})
```
## [](https://dev.to/timesurgelabs/how-to-run-llama-3-locally-with-ollama-and-open-webui-297d#conclusion)Conclusion
The release of Meta's Llama 3 and the open-sourcing of its Large Language Model (LLM) technology mark a major milestone for the tech community. With these advanced models now accessible through local tools like Ollama and Open WebUI, ordinary individuals can tap into their immense potential to generate text, translate languages, craft creative writing, and more. Furthermore, the availability of APIs enables developers to seamlessly integrate LLMs into new projects or enhance existing ones. Ultimately, the democratization of LLM technology through open-source initiatives like Llama 3 unlocks a vast realm of innovative possibilities and fuels creativity in the tech industry.

3
Generative AI/Setup Local LLM.md

@@ -1,9 +1,8 @@
## Prereqs
1.
2. Docker, [Install Docker Desktop on Windows | Docker Docs](https://docs.docker.com/desktop/install/windows-install/)
1. [Install WSL | Microsoft Learn](https://learn.microsoft.com/en-us/windows/wsl/install)
2. Docker, [Install Docker Desktop on Windows | Docker Docs](https://docs.docker.com/desktop/install/windows-install/)
3. Ollama [Download Ollama on Windows](https://ollama.com/download)

36
Linux/Install Apache web server.md

@@ -0,0 +1,36 @@
## Install Apache Web Server on Ubuntu 
```
$ sudo apt install apache2  
$ sudo systemctl enable apache2  
$ sudo systemctl start apache2  
```
## Test the Apache Web Server installation
### From the server
If you are logged into the server either directly or from an RDP connection, you can test the Apache installation following these steps.
Open a web browser on the server and go to: [http://localhost](http://localhost/)
You should see the Apache2 Ubuntu Default Page that says "It works!"
### From a computer on the network
If you want to test from another computer that is on the same network as the Apache server, you will need some information. Specifically, the IP address of the server.
To get the IP address of the Apache server, log in to the server and enter the following command in the terminal:
```
$ sudo hostname -I
```
Note that is a capital I; a lowercase i will instead report the loopback address, which is almost always 127.0.0.1.
From the computer on the network, open a browser window and enter the IP address as the web page URL: [http://ipaddress](http://ipaddress/). For example, on mine it's [http://192.168.1.12](http://192.168.1.12/).
You should see the same default "It works" page.
## The Apache Default Web Site Directory
The default web site in Apache on Ubuntu is at /var/www/html. If you try to edit or add to this site you will get permission errors because the folders and files are owned by root. The temptation is to change the ownership of everything to your account and go on from there. While this may be acceptable for a test or dev server, it is a bad habit to get into. Instead, use the following procedure to set up virtual web sites in Apache that you will then edit, and leave the default web site alone.

66
Linux/Install MySQL on Ubuntu.md

@@ -0,0 +1,66 @@
## Install MySQL on Ubuntu 
```
$ sudo apt install mysql-server   
```
## Enable MySQL on Ubuntu 
This step will add MySQL to the system startup so that MySQL will start every time that the computer starts. 
```
$ sudo systemctl enable mysql
```
## Secure MySQL (For Production) 
It is imperative to "button up" your installation of MySQL. This prevents hackers from getting into your databases. 
```
$ sudo mysql_secure_installation  
Press y|Y for Yes, any other key for No: y
Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1
Remove anonymous users? Y
Disallow root login remotely? Y
Remove test database? Y
```
## Log in to MySQL 
```
$ sudo mysql -u root -p
```
Create a new MySQL user because using root is BAD! 
```
CREATE USER 'sqladmin'@'localhost' IDENTIFIED BY 'password'; 
GRANT ALL PRIVILEGES ON *.* TO 'sqladmin'@'localhost' WITH GRANT OPTION; 
FLUSH PRIVILEGES; 
exit;   
```
## Install MySQL Workbench on Ubuntu Desktop 
Enter the following command in the terminal:
```
$ sudo apt install mysql-workbench
```
## Enable local_infile for OpenEMR
OpenEMR requires the use of local files for uploading content to database tables.
Enter the following commands in the terminal:
```
$ sudo mysql -u root
mysql> SET GLOBAL local_infile = 'ON';
mysql> SHOW GLOBAL VARIABLES LIKE 'local_infile';
```
## Allow Remote Connections
[https://linuxize.com/post/mysql-remote-access/](https://linuxize.com/post/mysql-remote-access/)
[How to Allow Remote MySQL Connections (phoenixnap.com)](https://phoenixnap.com/kb/mysql-remote-connection)
## References
[https://www.rosehosting.com/blog/how-to-install-mysql-on-ubuntu-18-04/](https://www.rosehosting.com/blog/how-to-install-mysql-on-ubuntu-18-04/)
[https://phoenixnap.com/kb/install-get-started-mysql-workbench-on-ubuntu](https://phoenixnap.com/kb/install-get-started-mysql-workbench-on-ubuntu)
[https://mi-squared.com/2019/11/05/openemr-external-data-load-query-error/](https://mi-squared.com/2019/11/05/openemr-external-data-load-query-error/)

62
Linux/Install PHP on Apache.md

@@ -0,0 +1,62 @@
## Install PHP on Apache Web Server in Ubuntu
The PHP programming language is widely used for server-side web applications, such as WordPress, OpenEMR, Joomla, phpBB, Magento, and others.
Enter the following command in the terminal:
```
$ sudo apt install php libapache2-mod-php
```
## Install essential PHP modules
Enter the following commands in the terminal:
```
$ sudo apt install php-cli
$ sudo apt install php-cgi
$ sudo apt install php-mysql
$ sudo systemctl restart apache2
```
## Test PHP installation
Add a web page to the Apache default web site.
Enter the following command in the terminal:
```
$ sudo nano /var/www/html/phpinfo.php
```
In the nano editor enter the following code:
```
<?php phpinfo(); ?>
```
Press Control + X, then press Y to save.
Open your browser window and enter the following URL:
[http://localhost/phpinfo.php](http://localhost/phpinfo.php)
If successful, you will see a page that shows information about your PHP installation.
## Install PHP modules required for OpenEMR
The following modules must be installed in order for OpenEMR to work.
Enter the following commands in the terminal:
```
$ sudo apt install php-curl
$ sudo apt install php-gd
$ sudo apt install php-ldap
$ sudo apt install php-mbstring
$ sudo apt install php-soap
$ sudo apt install php-xml
$ sudo apt install php-zip
```
## Install mCrypt
OpenEMR requires the use of mcrypt, which is no longer installed in PHP by default. This means you have to install it manually.
Enter the following commands in the terminal:
```
# install php dev
$ sudo apt install php-dev

# install build tools
$ sudo apt -y install gcc make autoconf libc-dev pkg-config

# install mcrypt dev
$ sudo apt -y install libmcrypt-dev

# build and install mcrypt
$ sudo pecl install mcrypt-1.0.3

# configure php to use mcrypt
$ sudo bash -c "echo extension=/usr/lib/php/20190902/mcrypt.so > /etc/php/7.4/cli/conf.d/mcrypt.ini"
$ sudo bash -c "echo extension=/usr/lib/php/20190902/mcrypt.so > /etc/php/7.4/apache2/conf.d/mcrypt.ini"

# test mcrypt
$ sudo php -i | grep "mcrypt"
```
## OpenEMR Architecture
This article shows all of the PHP and other dependencies that OpenEMR needs to operate properly. The installation process of OpenEMR assumes that you have already installed all of these modules. If you followed the runbooks in this section then you already have, but this is a good reference.
[https://www.open-emr.org/wiki/index.php/OpenEMR_System_Architecture#OpenEMR_Dependencies](https://www.open-emr.org/wiki/index.php/OpenEMR_System_Architecture#OpenEMR_Dependencies)
To see the installed PHP modules, enter the following command in the terminal:
```
$ sudo php -m
```

0
Linux/Stand-up a Linux Server.md → Linux/Install Ubuntu Linux.md

30
Linux/Install Wordpress Behind an NPM Reverse Proxy.md

@@ -0,0 +1,30 @@
[Running WordPress Behind SSL and NGINX Reverse Proxy | ldev blog](https://blog.ldev.app/running-wordpress-behind-ssl-and-nginx-reverse-proxy/)
> If you are behind a reverse proxy that does SSL/TLS for you, or in a similar situation, wordpress needs to know that this is the case (otherwise it will assume unencrypted http and will make all links to references unencrypted). If http gets redirected to https this can cause problems.
>
> You can resolve this by configuring the webserver to add certain headers, e.g. for Nginx inside the server block, add:
>
> ```bash
> location /blog/ {
>     proxy_pass http://backend:8081/;
>     proxy_set_header X-Forwarded-Host $host;
>     proxy_set_header X-Forwarded-Proto $scheme;
> }
> ```
>
> and in the wp-config.php (usually created after installation, so you'll have to do the install without styling):
>
> ```php
> /**
> * Handle SSL reverse proxy
> */
> if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
>     $_SERVER['HTTPS'] = 'on';
>
> if (isset($_SERVER['HTTP_X_FORWARDED_HOST'])) {
>     $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_X_FORWARDED_HOST'];
> }
> ```
>
> I had the same problem and found this out here: [https://www.variantweb.net/blog/wordpress-behind-an-nginx-ssl-reverse-proxy/](https://www.variantweb.net/blog/wordpress-behind-an-nginx-ssl-reverse-proxy/)

6
Linux/Install phpMyAdmin.md

@@ -0,0 +1,6 @@
[How to install and configure phpMyAdmin | Ubuntu](https://ubuntu.com/server/docs/how-to-install-and-configure-phpmyadmin)
```
sudo apt install phpmyadmin
```

4
Linux/Other Linux based servers.md

@@ -130,3 +130,7 @@ sudo nano /etc/gitea/app.ini
```
Note: I put Gitea behind Nginx Proxy Manager, where it was listening on port 3000 and being redirected to 443 for SSL with a cert. The problem was that the repo URLs still had port 3000, so I would get command-line errors when working with repos.
The fix was to edit the /etc/gitea/app.ini file and change the base URL to a normal port 80 URL; e.g., I deleted the :3000 from the base URL.
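For reference, the setting in question is `ROOT_URL` in the `[server]` section of app.ini; the domain below is a placeholder:

```
[server]
ROOT_URL = http://git.example.com/
```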

62
Linux/Virtual Web Sites in Apache.md

@@ -0,0 +1,62 @@
## Introduction 
Don't drop virtual sites into /var/www/html. You could end up with a situation where your virtual site is accessible via the default web site AND your virtual definition. It's just better to put your virtual sites into their own folders.
## Virtual Web Sites 
Virtual Web Sites are the mechanism that allows you to host more than one domain on a single Apache web server. For example, using virtual sites I can host [http://www.kenschae.com](http://www.kenschae.com/) in one folder and [http://www.mygreatsite.com](http://www.mygreatsite.com/) in another folder. This logic applies to hosting multiple hosts under a single domain. For example, I can host [http://intranet.kenschae.com](http://intranet.kenschae.com/) in one folder and [http://blog.kenschae.com](http://blog.kenschae.com/) in another. Finally, you can use the same approach to host unregistered domains that are only visible on your local network (if you have a DNS server). For example, I can host [http://dev.internal.com](http://dev.internal.com/) in one folder, [http://intranet.internal.com](http://intranet.internal.com/) in another, and [http://blog.fakedomain.com](http://blog.fakedomain.com/) in yet another.
For consistency and maintenance, you should have some kind of naming scheme or tracker to keep it all straight. 
## Setup a virtual web site 
The example below creates a new virtual web site at site.local. When creating your own virtual site, replace the references to site.local with your own host.domain combination.
Enter the following commands in the terminal: 
```
# create the folder to host the virtual site
$ sudo mkdir -p /var/www/site.local/public_html

# give Apache access to the files and folders in the virtual site
$ sudo chgrp -R www-data /var/www/site.local
$ sudo find /var/www/site.local -type d -exec chmod g+rx {} +
$ sudo find /var/www/site.local -type f -exec chmod g+r {} +

# give your user read/write permissions to the files and folders
# (replace USERNAME with your user account name)
$ sudo chown -R USERNAME /var/www/site.local
$ sudo find /var/www/site.local -type d -exec chmod u+rwx {} +
$ sudo find /var/www/site.local -type f -exec chmod u+rw {} +

# make new files inherit the www-data group [optional]
$ sudo find /var/www/site.local -type d -exec chmod g+s {} +

# prevent other users from seeing the data in the files and folders [optional]
$ sudo chmod -R o-rwx /var/www/site.local

# create a test web page for the site
$ cd /var/www/site.local/public_html
$ sudo nano index.html
```
In the nano editor add the following code 
`<html> <head><title>Site</title></head><body><h1>Site</h1></body> </html>`  
Press Ctrl+X then Y to save the web page. 
## Configure Apache to use the new site
Enter the following commands in the terminal: 
```
$ cd /etc/apache2/sites-available
$ sudo cp 000-default.conf site.local.conf
$ sudo nano site.local.conf
```
In the nano editor modify the config file to: 
```
<VirtualHost *:80>
    # The ServerName directive sets the request scheme, hostname and port that
    # the server uses to identify itself. In the context of virtual hosts, the
    # ServerName specifies what hostname must appear in the request's Host:
    # header to match this virtual host.
    ServerName site.local
    ServerAlias www.site.local

    ServerAdmin webmaster@site.local
    DocumentRoot /var/www/site.local/public_html

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    #LogLevel info ssl:warn

    ErrorLog ${APACHE_LOG_DIR}/site.local-error.log
    CustomLog ${APACHE_LOG_DIR}/site.local-access.log combined

    # Config files from conf-available/ can be included for only this
    # virtual host, e.g. after disabling globally with "a2disconf":
    #Include conf-available/serve-cgi-bin.conf
</VirtualHost>
```
Press Ctrl+X then Y to save the config file. 
Enter the following commands in the terminal: 
```
$ sudo a2ensite site.local
$ sudo apachectl configtest
$ sudo systemctl restart apache2
```
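The editor steps above can also be scripted; a minimal sketch that generates the config with a heredoc (written to /tmp here — on the server you would copy it into /etc/apache2/sites-available with sudo):

```shell
# Generate a minimal site.local vhost file with a heredoc.
# Quoting 'EOF' keeps ${APACHE_LOG_DIR} literal so Apache expands it, not the shell.
cat > /tmp/site.local.conf <<'EOF'
<VirtualHost *:80>
    ServerName site.local
    ServerAlias www.site.local
    ServerAdmin webmaster@site.local
    DocumentRoot /var/www/site.local/public_html
    ErrorLog ${APACHE_LOG_DIR}/site.local-error.log
    CustomLog ${APACHE_LOG_DIR}/site.local-access.log combined
</VirtualHost>
EOF

# Then, on the server:
#   sudo cp /tmp/site.local.conf /etc/apache2/sites-available/
#   sudo a2ensite site.local
#   sudo apachectl configtest && sudo systemctl restart apache2
```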

27
Linux/linux-commands.md

@ -1,4 +1,31 @@
### Create new sudo account
Ubuntu Server does this by default
```
sudo adduser {username}
sudo usermod -aG sudo {username}
```
### Disable root
Ubuntu Server does this by default
```
sudo passwd -l root
```
### Change HOSTNAME
```
sudo hostnamectl set-hostname {hostname}
hostnamectl
sudo nano /etc/hosts
# change hostname in hosts file
```
### Copy Files and Folders to a Directory
```
cp -a /source/. /dest/
```
`-a` preserves all file attributes
the trailing `.` on the source copies all files and folders, including hidden ones
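A quick way to see what the trailing `.` buys you (the /tmp paths are throwaway examples):

```shell
# Set up a source tree containing a hidden file
mkdir -p /tmp/cp-demo/source /tmp/cp-demo/dest
echo "visible" > /tmp/cp-demo/source/file.txt
echo "hidden"  > /tmp/cp-demo/source/.config

# Copy everything, hidden files included, preserving attributes
cp -a /tmp/cp-demo/source/. /tmp/cp-demo/dest/

ls -a /tmp/cp-demo/dest    # both file.txt and .config are present
```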
## Networking

BIN — binary files added (not shown in diff):
- Networking/FortiNet/Pasted image 20240503073354.png (14 KiB)
- Networking/FortiNet/Pasted image 20240503073435.png (44 KiB)
- Networking/FortiNet/Pasted image 20240503073452.png (43 KiB)
- Networking/FortiNet/Pasted image 20240503075049.png (46 KiB)
- Networking/FortiNet/Pasted image 20240503075516.png (58 KiB)
- Networking/FortiNet/Pasted image 20240503075922.png (10 KiB)
- Networking/FortiNet/Pasted image 20240503080006.png (20 KiB)
- Networking/FortiNet/Pasted image 20240503080426.png (46 KiB)
- Networking/FortiNet/Pasted image 20240503080438.png (41 KiB)
- Networking/FortiNet/Pasted image 20240503080751.png (51 KiB)
- Networking/FortiNet/Pasted image 20240503081047.png (52 KiB)
- Networking/FortiNet/Pasted image 20240503081230.png (34 KiB)
- Networking/FortiNet/Pasted image 20240503081241.png (22 KiB)
- Networking/FortiNet/Pasted image 20240503081344.png (49 KiB)

16
Networking/FortiNet/Self Hosting.md

@ -0,0 +1,16 @@
Virtual IPs
![[Pasted image 20240503073354.png]]
IPv4 Policy
![[Pasted image 20240503073435.png]]
![[Pasted image 20240503073452.png]]

2
Networking/NGINX Proxy Manager.md

@ -8,7 +8,7 @@ Learn about the difference between forward and reverse proxies, and what you wou
## Prerequisites
[[Stand-up a Linux Server]] with Docker and Docker Compose installed
[[Install Ubuntu Linux]] with Docker and Docker Compose installed
## Software
[Nginx Proxy Manager](https://nginxproxymanager.com/)

6
Wordpress.md

@ -0,0 +1,6 @@
In order to FTP and unzip WordPress, it is easiest to make my user ID the owner and group of the web directory. Then I can FTP the files, unzip, and edit as needed to get WordPress installed.
Once I want to work inside WordPress, I need to change the owner and group to www-data so that Apache can install plugins and perform updates.
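A sketch of that ownership toggle; the directory here is a scratch stand-in created with mktemp (on a real server, target your site's docroot under /var/www and run the chown commands with sudo):

```shell
# Hedged sketch of the ownership toggle described above.
WP_DIR=$(mktemp -d)                 # stand-in for /var/www/<site>/public_html
touch "$WP_DIR/wp-config.php"

# 1) While FTP-ing and unzipping, own the tree yourself:
chown -R "$(id -un)":"$(id -gn)" "$WP_DIR"

# 2) Before managing plugins inside WordPress, hand it back to Apache:
#    sudo chown -R www-data:www-data "$WP_DIR"
```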