TITLE: PowerShell: Perceptron Demo - No Math
AUTHOR: Eddie S. Jackson, MrNetTek
DATE: September 18, 2024 at 7:08 AM EST
RESEARCH: Google search, current news, books, ChatGPT-4o
EDITING: Grammarly

**What’s a Perceptron?**

A Perceptron is like a really simple, somewhat moody robot trying to make decisions based on what you feed it—kind of like your friend who can’t decide what to eat but has plenty of ideas. Wait a minute, I’m usually that friend. **🙂**

Imagine our moody little robot sitting at a table with two buttons: one says **YES** and the other says **NO**. We give it some snacks (inputs), and it weighs them against its own taste preferences (weights) and its overall satisfaction-or-grumpiness factor (bias).

So, if we give it a piece of broccoli (input) and it remembers that it once had a bad experience with broccoli (*negative* weight), it might lean toward the “NO” button. But if you offer it a cookie (*positive* weight), it might hit the “YES” button. That is the core concept of how a Perceptron works.

In this article, I provide a demo of a simple Perceptron model implemented in PowerShell. I like using PowerShell because it requires very little programming expertise, so almost anyone can test this code in their own lab or on their own computer (without too much pre-setup). I will post a C# version for the developers out there. The model in our script utilizes *two* neurons, each with its own set of weights and bias values, to process input data through weighted summation and activation functions. The output of each neuron indicates whether it has “activated” (or fired) based on the input provided.

What are Perceptrons even used for? They are used for various tasks in machine learning and artificial intelligence, such as pattern recognition, binary classification, decision-making, and, of course, serving as foundational components of neural networks. The concept of a Perceptron isn’t overly difficult to understand, but its implementation has far-reaching implications. Perceptrons make decisions not based on a simple yes or no—like a common form of conditional logic in programming—but based on weighted values and biases. This allows for a kind of adaptable decision-making that evolves as the input values change. This adaptability is how AI can learn. Is this like human learning? Well, sort of, but not really. Perceptrons are mechanical and simplistic in nature. While they mimic some aspects of human learning, human learning is much more complex and difficult to quantify.
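The difference between fixed conditional logic and weighted decision-making can be sketched in a few lines of PowerShell (the variable names here are illustrative, not part of the demo script):

```
# Fixed conditional logic: the rule is hard-coded and never changes
$snack = "cookie"
if ($snack -eq "cookie") { $decision = "YES" } else { $decision = "NO" }

# Weighted decision: the same input can produce a different answer
# once $weight or $bias is adjusted
$snackScore = 1.0    # 1.0 = cookie offered
$weight = 0.9        # learned preference for cookies
$bias = -0.5         # baseline grumpiness
if (($snackScore * $weight + $bias) -ge 0) { $decision = "YES" } else { $decision = "NO" }
```

Change the weight to -0.9 (a bad cookie experience) and the very same snack now gets a “NO” without touching the decision rule itself.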

Note, I have intentionally not added the mathematics and theory for this demo (it confuses a lot of people when I start talking about linear separability). When I post the C# demo, I will include the math and theory.

Let’s look at some code first, and then review the code summary.

**Code.ps1**

```
# MrNetTek
# eddiejackson.net
# 9/18/2024
# free for public use
# free to claim as your own

# Inputs
$inputs = @(0.5, -1.2)

# Weights for neurons
$weightsNeuron1 = @(0.3, -0.8)
$weightsNeuron2 = @(0.7, 0.5)

# Bias for neurons
$biasNeuron1 = 0.2
$biasNeuron2 = -0.1

######################################
# Weighted Sum Function: Calculate
function Calculate-WeightedSum {
    param (
        [float[]]$inputs,
        [float[]]$weights,
        [float]$bias
    )
    $sum = 0
    for ($i = 0; $i -lt $inputs.Count; $i++) {
        $sum += $inputs[$i] * $weights[$i]
    }
    return $sum + $bias
}

# Activation Function
# Type: Step
function Step-Activation {
    param (
        [float]$weightedSum
    )
    if ($weightedSum -ge 0) {
        return 1
    } else {
        return 0
    }
}

# Calculate the weighted sum for both neurons
$weightedSumNeuron1 = Calculate-WeightedSum -inputs $inputs -weights $weightsNeuron1 -bias $biasNeuron1
$weightedSumNeuron2 = Calculate-WeightedSum -inputs $inputs -weights $weightsNeuron2 -bias $biasNeuron2

# Return output of both neurons using the activation function
$outputNeuron1 = Step-Activation -weightedSum $weightedSumNeuron1
$outputNeuron2 = Step-Activation -weightedSum $weightedSumNeuron2

Clear-Host

# Display Outputs
Write-Host "Perceptron Intelligence with 2 Neurons"
Write-Host "----------------------------------------`n"
Write-Output "Inputs: $inputs"
Write-Output "Neuron 1 - Weights: $weightsNeuron1, Bias: $biasNeuron1"
Write-Output "Neuron 2 - Weights: $weightsNeuron2, Bias: $biasNeuron2"
Write-Output "`nNeuron 1:"
Write-Output "Weighted Sum: $weightedSumNeuron1"
Write-Output "Output: $outputNeuron1`n"
Write-Output "Neuron 2:"
Write-Output "Weighted Sum: $weightedSumNeuron2"
Write-Output "Output: $outputNeuron2`n"
```

Output
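With the values in this script, the run produces output along these lines (the weighted sums work out to 1.31 for Neuron 1 and -0.35 for Neuron 2; exact float formatting may vary slightly on your system):

```
Perceptron Intelligence with 2 Neurons
----------------------------------------

Inputs: 0.5 -1.2
Neuron 1 - Weights: 0.3 -0.8, Bias: 0.2
Neuron 2 - Weights: 0.7 0.5, Bias: -0.1

Neuron 1:
Weighted Sum: 1.31
Output: 1

Neuron 2:
Weighted Sum: -0.35
Output: 0
```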

**Code Summary**

**Components:** Inputs, Neurons, Bias, Weights, Functions, Output

**Inputs**

The Perceptron model accepts the following input vector:

**Inputs:** 0.5, -1.2

These values are fed into the neurons for processing.

**Neurons**

Two neurons are defined in the model, each equipped with distinct weights and biases. These matter because they make the model flexible, i.e., not static, not hard-coded.

**[Neuron 1]**

- **Weights:** 0.3, -0.8
- **Bias:** 0.2

**[Neuron 2]**

- **Weights:** 0.7, 0.5
- **Bias:** -0.1

**Functions**

**Bias & Weighted Sum: Calculate-WeightedSum**

**Purpose:** This function computes the weighted sum of the inputs for a given neuron, including its bias.

**Parameters:**

- inputs: An array of float values representing the input features.
- weights: An array of float values representing the neuron’s weights.
- bias: A float value representing the neuron’s bias.

**Process:**

- Initialize a sum variable to zero.
- Iterate through each input, multiplying it by the corresponding weight and accumulating the result.
- Add the bias to the accumulated sum.
- Return the final weighted sum.
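For Neuron 1 and the inputs above, those steps reduce to simple arithmetic:

```
# inputs: 0.5, -1.2   weights: 0.3, -0.8   bias: 0.2
# (0.5 * 0.3) + (-1.2 * -0.8) = 0.15 + 0.96 = 1.11
# 1.11 + 0.2 (bias)           = 1.31
$sum = (0.5 * 0.3) + (-1.2 * -0.8) + 0.2   # 1.31
```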

**Step-Activation**

**Purpose:** This function determines the output of a neuron based on the weighted sum calculated previously.

**Parameters:**

- weightedSum: A float value representing the weighted sum input to the activation function.

**Process:**

- If the weighted sum is greater than or equal to zero, the function returns 1 (indicating activation).
- Otherwise, it returns 0 (indicating no activation).
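Applied to the two weighted sums this script computes (1.31 and -0.35), the step function gives:

```
Step-Activation -weightedSum 1.31     # Neuron 1: returns 1 (fires)
Step-Activation -weightedSum -0.35    # Neuron 2: returns 0 (stays quiet)
```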

**Execution Flow**

- The weighted sum for Neuron 1 is calculated using the Calculate-WeightedSum function with its specific weights and bias.
- Similarly, the weighted sum for Neuron 2 is computed.
- Each neuron’s weighted sum is then passed through the Step-Activation function to determine its output.
- Finally, the results are displayed, including the input values, weights, biases, weighted sums, and the activation outputs for both neurons.
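Note that the demo performs only a single forward pass; the weights and biases never change. As a rough sketch of how learning could be bolted on, the classic perceptron update rule nudges each weight toward the correct answer whenever the output is wrong (the function name and learning rate below are my own illustration, not part of the demo):

```
function Update-Weights {
    param (
        [float[]]$inputs,
        [float[]]$weights,
        [float]$bias,
        [int]$target,                # the answer we wanted (0 or 1)
        [int]$output,                # the answer the neuron gave
        [float]$learningRate = 0.1
    )
    # No error means no change; otherwise shift each weight a little
    $err = $target - $output
    for ($i = 0; $i -lt $weights.Count; $i++) {
        $weights[$i] += $learningRate * $err * $inputs[$i]
    }
    $bias += $learningRate * $err
    return @{ Weights = $weights; Bias = $bias }
}
```

Repeating this adjustment over many labeled examples is what “training” a perceptron actually means.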

**Output**

When you run the code, this output is returned:

- Inputs: Displays the input vector.
- Neuron 1: Shows the weights, bias, weighted sum, and output.
- Neuron 2: Displays the same details as for Neuron 1.

**Notes**

Two-Class Averaged Perceptron component (Microsoft)

Microsoft Neural Network Algorithm Technical Reference (Microsoft)

*Directly from AI (I thought some people may find this interesting):*

**How does Perceptron learning compare to how humans learn?**

ChatGPT said:

The way perceptrons and humans learn share some similarities but also exhibit significant differences. Here’s a comparison:

**Similarities**

**Learning from Experience:**

Both humans and perceptrons improve their ability to classify or make decisions based on past experiences. For instance, humans learn from feedback about their decisions, much like how a perceptron adjusts its weights based on errors.

**Pattern Recognition:**

Humans recognize patterns in data, such as shapes or colors, similar to how a perceptron learns to distinguish between different classes based on input features.

**Differences**

**Complexity of Learning:**

- Perceptron: A perceptron learns only linear relationships and struggles with more complex, non-linear data. It requires a straightforward separation, limiting its ability to understand intricate patterns.
- Humans: Humans can grasp complex, non-linear relationships and can make sense of ambiguous or overlapping data. For example, we can distinguish between objects even if they share similar features or are presented in varied contexts.

**Flexibility:**

- Perceptron: Once trained, a perceptron is rigid; it can only apply what it has learned to similar data. If faced with new, unseen patterns that don’t fit the learned linear boundary, it may fail.
- Humans: Humans are adaptable learners. We can generalize knowledge from one context to another, recognize exceptions, and apply reasoning to unfamiliar situations.

**Learning Mechanisms:**

- Perceptron: Learning in a perceptron is often a mechanical process involving adjustments to weights based on a defined algorithm (like gradient descent).
- Humans: Human learning is influenced by emotions, social interactions, and cognitive processes, which can affect motivation, attention, and retention.

**Data Representation:**

- Perceptron: Inputs to a perceptron are typically numerical features that require clear representation.
- Humans: We process sensory information (visual, auditory, etc.) in a rich and nuanced way, incorporating context and experience to inform our understanding.

Tags: Eddie Jackson Artificial Intelligence, MrNetTek, Eddie Jackson Computer, AI

#ArtificialIntelligence #MrNetTek