Hyperparameter Tuning Of Neural Networks Using Keras Tuner
This article was published as a part of the Data Science Blogathon
Introduction

Neural networks have many hyperparameters, and tuning them manually is hard. Keras Tuner makes it simple to tune the hyperparameters of a neural network, much like the Grid Search or Randomized Search you have seen in classical machine learning.

In this article, you will learn how to tune the hyperparameters of a neural network using Keras Tuner. We will start with a very simple neural network, then perform hyperparameter tuning and compare the results.
Developing deep learning models is an iterative process: you start with an initial architecture and reconfigure it until you get a model that can be trained efficiently in terms of time and compute resources.

The settings you adjust are called hyperparameters: you write code, observe the performance, and repeat the process until the performance is good. The process of finding a good set of hyperparameters is called hyperparameter tuning.

Hyperparameter tuning is a very important part of model building; if it is skipped, the model may suffer from problems such as long training times and useless parameters.
Hyperparameters are usually of two types:

Model-based hyperparameters: these define the model itself, such as the number of hidden layers and the number of neurons per layer.

Algorithm-based hyperparameters: these influence training speed and quality, such as the learning rate in gradient descent.
The number of hyperparameters can increase dramatically for more complex models, and tuning them manually can be quite challenging.
The benefit of Keras Tuner is that it makes one of the most challenging tasks, hyperparameter tuning, very easy in just a few lines of code.
Keras Tuner

Keras Tuner is a library for tuning the hyperparameters of a neural network; it helps you pick optimal hyperparameters for a neural network implemented in TensorFlow.
To install Keras Tuner, just run the command below:

pip install keras-tuner

But wait, why do we need Keras Tuner?
The answer is that hyperparameters play an important role in developing a good model: they can make a large difference, help prevent overfitting, and help you achieve a good bias-variance trade-off.
Tuning our hyperparameters using Keras Tuner

First, we will develop a baseline model, and then we will use Keras Tuner to develop a tuned model. I will be using TensorFlow for the implementation.
Step 1 (Download and prepare the dataset)

from tensorflow import keras  # importing keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()  # load the data using the Keras datasets API
x_train = x_train.astype('float32') / 255.0  # normalize the training images
x_test = x_test.astype('float32') / 255.0  # normalize the testing images

Step 2 (Develop the baseline model)

Now, we will build our baseline neural network using the MNIST dataset, which will recognize handwritten digits, so let's build a deep neural network.
model1 = keras.Sequential()
model1.add(keras.layers.Flatten(input_shape=(28, 28)))  # flatten the 28 x 28 images
model1.add(keras.layers.Dense(units=512, activation='relu', name='dense_1'))  # 512 neurons with relu activation
model1.add(keras.layers.Dropout(0.2))  # dropout layer with a rate of 0.2
model1.add(keras.layers.Dense(10, activation='softmax'))  # output layer with 10 classes

Step 3 (Compile and train the model)

Now that we have built our baseline model, it's time to compile and train it. We will use the Adam optimizer with a learning rate of 0.001, and train for 10 epochs with a validation split of 0.2.
model1.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
               loss=keras.losses.SparseCategoricalCrossentropy(),
               metrics=['accuracy'])
model1.fit(x_train, y_train, epochs=10, validation_split=0.2)

Step 4 (Evaluate the model)

Now that the model is trained, we will evaluate it on the test set to see its performance.
model1_eval = model1.evaluate(x_test, y_test, return_dict=True)

Tuning your model using Keras Tuner

Step 1 (Import the libraries)

import tensorflow as tf
import kerastuner as kt  # in newer releases the package is imported as keras_tuner

Step 2 (Build the model using Keras Tuner)

Now you will set up a hypermodel (the model you set up for hypertuning is called a hypermodel). We define the hypermodel using the model builder function below, which returns a compiled model with the tuned hyperparameters.
In the classification model below, we will tune two hyperparameters: the number of neurons in the first Dense layer and the learning rate of the Adam optimizer.
def model_builder(hp):
    '''
    Args:
        hp - Keras tuner object
    '''
    # Initialize the Sequential API and start stacking the layers
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(28, 28)))

    # Tune the number of units in the first Dense layer
    # Choose an optimal value between 32 and 512
    hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
    model.add(keras.layers.Dense(units=hp_units, activation='relu', name='dense_1'))

    # Add the next layers
    model.add(keras.layers.Dropout(0.2))
    model.add(keras.layers.Dense(10, activation='softmax'))

    # Tune the learning rate for the optimizer
    # Choose an optimal value from 0.01, 0.001, or 0.0001
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),
                  loss=keras.losses.SparseCategoricalCrossentropy(),
                  metrics=['accuracy'])

    return model

In the above code, here are some notes:
The Int() method defines the search space for the Dense units. It lets you set a minimum value, a maximum value, and the step size used when incrementing between those values.

The Choice() method is used for the learning rate. It lets you define a set of discrete values to include in the search space when hypertuning.
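Beyond Int() and Choice(), the KerasTuner hp object also offers Float() and Boolean(). The sketch below is our own illustration (the function name model_builder_extended and the tuned dropout/batch-norm choices are assumptions, not part of the original article):

```python
from tensorflow import keras

# Illustrative sketch (ours): Float() samples a continuous range, Boolean() a
# True/False switch, here used to gate an optional BatchNormalization layer.
def model_builder_extended(hp):
    hp_dropout = hp.Float('dropout', min_value=0.0, max_value=0.5, step=0.1)
    hp_batch_norm = hp.Boolean('batch_norm')

    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(28, 28)))
    model.add(keras.layers.Dense(hp.Int('units', min_value=32, max_value=512, step=32),
                                 activation='relu'))
    if hp_batch_norm:
        model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Dropout(hp_dropout))
    model.add(keras.layers.Dense(10, activation='softmax'))
    model.compile(optimizer=keras.optimizers.Adam(
                      learning_rate=hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])),
                  loss=keras.losses.SparseCategoricalCrossentropy(),
                  metrics=['accuracy'])
    return model
```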
Step 3 (Instantiate the tuner and tune the hyperparameters)

You will use the Hyperband tuner. Hyperband is an algorithm developed for hyperparameter optimization; it uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. You can read more about the intuition behind it here.
The basic algorithm is summarized below; if you find it hard to follow, feel free to skip ahead, as it is a large topic that deserves a blog of its own.
Hyperband determines the number of models to train in a bracket by computing 1 + log_factor(max_epochs) and rounding it up to the nearest integer.
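As a quick sanity check of that formula, here is a small sketch (our own illustration) computing the bracket count for the settings used below, max_epochs=10 and factor=3:

```python
import math

max_epochs, factor = 10, 3
# 1 + log_3(10) ≈ 1 + 2.1 = 3.1, rounded up to the nearest integer -> 4
num_brackets = math.ceil(1 + math.log(max_epochs, factor))
print(num_brackets)  # 4
```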
# Instantiate the tuner
tuner = kt.Hyperband(model_builder,               # the hypermodel
                     objective='val_accuracy',    # objective to optimize
                     max_epochs=10,
                     factor=3,                    # the factor discussed above
                     directory='dir',             # directory to save logs
                     project_name='khyperband')

# hypertuning settings
tuner.search_space_summary()

Output:

# Search space summary
# Default search space size: 2
# units (Int)
# {'default': None, 'conditions': [], 'min_value': 32, 'max_value': 512, 'step': 32, 'sampling': None}
# learning_rate (Choice)
# {'default': 0.01, 'conditions': [], 'values': [0.01, 0.001, 0.0001], 'ordered': True}

Step 4 (Search for the best hyperparameters)

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

# Perform hypertuning
tuner.search(x_train, y_train, epochs=10, validation_split=0.2, callbacks=[stop_early])
best_hps = tuner.get_best_hyperparameters()[0]

Step 5 (Rebuild and train the model with the optimal hyperparameters)

# Build the model with the optimal hyperparameters
h_model = tuner.hypermodel.build(best_hps)
h_model.summary()
h_model.fit(x_train, y_train, epochs=10, validation_split=0.2)

Now, you can evaluate this model:
h_eval_dict = h_model.evaluate(x_test, y_test, return_dict=True)

Comparison with and without hyperparameter tuning:
BASELINE MODEL:
number of units in 1st Dense layer: 512
learning rate for the optimizer: 0.0010000000474974513
loss: 0.08013473451137543
accuracy: 0.9794999957084656

HYPERTUNED MODEL:
number of units in 1st Dense layer: 224
learning rate for the optimizer: 0.0010000000474974513
loss: 0.07163219898939133
accuracy: 0.979200005531311
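A sketch of how such a comparison could be printed (assuming the model1_eval, h_eval_dict, and best_hps variables defined earlier; the exact format is ours):

```python
# best_hps.get() looks up a tuned value by the name it was registered under.
print(f"Hypertuned units: {best_hps.get('units')}")
print(f"Hypertuned learning rate: {best_hps.get('learning_rate')}")
print(f"Baseline   -> loss: {model1_eval['loss']:.4f}, accuracy: {model1_eval['accuracy']:.4f}")
print(f"Hypertuned -> loss: {h_eval_dict['loss']:.4f}, accuracy: {h_eval_dict['accuracy']:.4f}")
```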
If you compare training times, the baseline model takes longer to train than the hypertuned model: the hypertuned model has fewer neurons (224 vs. 512 in the first Dense layer), so it is faster.

The hypertuned model is also more robust: compare the loss of the baseline model (0.0801) with the loss of the hypertuned model (0.0716).
End Notes

Thanks for reading this article. I hope you found it helpful and that you will use Keras Tuner in your own projects to build better neural networks.
About the Author

Ayush Singh

I am a 14-year-old learner and machine learning and deep learning practitioner, working in the domains of Natural Language Processing, Generative Adversarial Networks, and Computer Vision. I also make videos on machine learning, deep learning, and GANs on my YouTube channel, Newera. I am a competitive coder, still practicing these technologies, and a passionate learner and educator. You can connect with me on LinkedIn: Ayush Singh.
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
Hyperparameter Tuning Using Randomized Search
This article was published as a part of the Data Science Blogathon.
Introduction

Hyperparameter tuning or optimization is important in any machine learning model training activity. The hyperparameters of a model cannot be determined from the given datasets through the learning process; rather, they come from the mathematical formulation of the model itself. For example, the weights learned while training a model are parameters, whereas something like the learning rate is a hyperparameter. The performance of a model on a dataset significantly depends on proper tuning, i.e., finding the best combination of the model hyperparameters.
Different techniques are available for hyperparameter optimization, such as Grid Search, Randomized Search, and Bayesian Optimization. Today we will discuss the method and implementation of Randomized Search. Data scientists set model hyperparameters to control the implementation aspects of the model; once fixed, they can be thought of as model settings. These settings need to be tuned for each problem, because the best hyperparameters for one dataset will not be the best across all datasets.
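To make the parameter/hyperparameter distinction concrete, here is a minimal sketch (our own toy example, not from the article):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy illustration: hyperparameters are chosen before training,
# parameters are learned during training.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

model = LogisticRegression(C=0.5, max_iter=1000)  # C is a hyperparameter: set, not learned
model.fit(X, y)

print(model.coef_)  # the coefficients are parameters: learned from the data
```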
What is a Randomized Search?

In Randomized Search, a fixed number of hyperparameter settings is sampled at random from specified distributions, and each sampled combination is evaluated to find the best-performing one.

[Figure: Example of a Randomized Search space for tuning two hyperparameters]
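Conceptually, each iteration of Randomized Search draws one random combination from the declared search space. A toy sketch of that sampling loop (ours, for illustration only):

```python
import random

# Each iteration samples one random combination from the declared search space.
search_space = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.1, 0.01, 0.001],
}
random.seed(0)
for i in range(5):
    combo = {name: random.choice(values) for name, values in search_space.items()}
    print(i, combo)
```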
Python Implementation

Let's see the Python-based implementation of Randomized Search. The scikit-learn module comes with some popular reference datasets, including methods to load and fetch them easily. We will use the breast cancer Wisconsin dataset for binary classification; it is a classic and straightforward binary classification dataset.
Scikit-learn's implementation of Randomized Search is the RandomizedSearchCV function. Let's see the important parameters of this function:
estimator: An object of the scikit-learn model type.
param_distributions: Dictionary with parameter names as keys and distributions or lists of parameters to search.
scoring: a scoring strategy to evaluate the performance of the cross-validated model on the test set.
n_iter: the number of parameter settings that are sampled. Selecting too large a number increases processing time, so n_iter trades off run time against the quality of the solution.

cv: determines the cross-validation splitting strategy used by this method.
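A minimal sketch of how these parameters fit together (illustrative only; the full, runnable example appears later in this article):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

search = RandomizedSearchCV(
    estimator=RandomForestClassifier(),               # estimator
    param_distributions={"max_depth": [3, 5, None]},  # param_distributions
    n_iter=10,                                        # number of sampled settings
    scoring="accuracy",                               # scoring strategy
    cv=5,                                             # cross-validation splitting strategy
)
# Calling search.fit(X, y) would run the sampled evaluations.
```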
From sklearn.datasets, we will use the load_breast_cancer method to load the breast cancer Wisconsin dataset. If return_X_y is set to True, it returns a (data, target) tuple.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
print(X.shape)

Let's use train_test_split to split the dataset into train and test sets:
X_train, X_test, y_train, y_test = train_test_split(X, y)

We will use StandardScaler to preprocess the data. Note that the training data is fit-transformed, while the test data is only transformed:
from sklearn.preprocessing import StandardScaler

ss = StandardScaler()
X_train_ss = ss.fit_transform(X_train)
X_test_ss = ss.transform(X_test)

First, we will use the Random Forest classifier without Randomized Search, with the default values of the hyperparameters.
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier()
clf.fit(X_train_ss, y_train)
y_pred = clf.predict(X_test_ss)

The accuracy score can be calculated on the test data and a confusion matrix can be computed:
from sklearn.metrics import accuracy_score, confusion_matrix

print(confusion_matrix(y_test, y_pred))
acc_rf = accuracy_score(y_test, y_pred)
print(acc_rf)

Next, we will use the Random Forest classifier with Randomized Search to find the best possible values of the hyperparameters. We are tuning five hyperparameters of the Random Forest classifier here: max_depth, max_features, min_samples_split, bootstrap, and criterion. Randomized Search will sample from the given hyperparameter distributions to find the best values, and we will use a 5-fold cross-validation scheme (cv=5).
Once the training data is fit into the model, the best parameters from the Randomized Search can be extracted from the final result.
param_dist = {"max_depth": [3, 5], "max_features": sp_randint(1, 11), "min_samples_split": sp_randint(2, 11), "bootstrap": [True, False], "criterion": ["gini", "entropy"]} # build a classifier clf = RandomForestClassifier(n_estimators=50) # Randomized search random_search = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=20, cv=5, iid=False) random_search.fit(X_train_ss, y_train) print(random_search.best_params_)The com
Conclusion

While Grid Search checks every combination of the hyperparameters, it underperforms when we need to handle big datasets, as trying all the combinations is a tedious job. If a model has m hyperparameters and we test n values for each, Grid Search checks n^m combinations. Randomized Search assumes that not all hyperparameters are equally important: every iteration samples a random combination of the hyperparameters, and the chances of finding a good combination quickly are higher (a quick count follows the takeaways below). The key takeaways from this article are:
In machine learning, hyperparameter optimization is crucial so that the model can be optimally trained on the given dataset. These are not learned through the learning process.
Randomized Search requires less processing time than Grid Search.
In Randomized Search, a fixed number of parameter settings is sampled from the specified distributions.
Python scikit-learn library implements Randomized Search in its RandomizedSearchCV function. This function needs to be used along with its parameters, such as estimator, param_distributions, scoring, n_iter, cv, etc.
Randomized Search is faster than Grid Search. However, there is a trade-off between reduced processing time and finding the optimal combination: Randomized Search does not guarantee finding the optimal combination of hyperparameters.
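To make the last takeaway concrete, here is a quick count under illustrative assumptions (5 hyperparameters, 4 candidate values each; the numbers are ours):

```python
m, n = 5, 4                   # 5 hyperparameters, 4 candidate values each
grid_search_fits = n ** m     # every combination: 4^5 = 1024 model fits
randomized_search_fits = 20   # a fixed budget, e.g. n_iter=20
print(grid_search_fits, randomized_search_fits)
```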
The media shown in this article is not owned by Analytics Vidhya and are used at the Author’s discretion.
Approaching Regression With Neural Networks Using Tensorflow
This article was published as a part of the Data Science Blogathon.
Introduction

Every supervised machine learning technique basically solves either a classification or a regression problem. Classification is a technique where we classify the outcome into distinct categories whose range is usually finite, whereas regression involves predicting a real number whose range is infinite. Some examples of regression are predicting the price of a house given the number of bedrooms and floors in it, or predicting a product rating given the specifications of the product.
Neural networks are one of the most important algorithms that have profound applications in computer vision and natural language processing domains. Now let’s apply these neural networks on tabular data for a regression task. We will use Tensorflow and Keras deep learning library to build and train our neural network.
Let’s get started…
About the Dataset

The dataset we are using to train our model is the Auto MPG dataset. It consists of different features of automobiles common in the late 1970s and early 1980s, including attributes like the number of cylinders, horsepower, and the weight of the car. Using these features, we should predict the miles per gallon (mileage) of the automobile, making this a multivariate regression task. More information about the dataset can be found here.
Getting Started

Let's get started by importing the required libraries and downloading the dataset.
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import Sequential
from tensorflow.keras import optimizers

Now let's download and parse the dataset using pandas into a data frame.
# UCI repository location of the Auto MPG data
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"

column_names = ["MPG", "Cylinders", "Displacement", "Horsepower", "Weight",
                "Acceleration", "Model Year", "Origin"]
df = pd.read_csv(url, names=column_names, sep=" ",
                 na_values="?", comment="\t", skipinitialspace=True)
We have directly downloaded the dataset using the pandas read_csv method and parsed it to a data frame. We can check the contents of the data frame for more details.
df.head()

Now let's preprocess this data frame to make it easier for our deep learning model to understand. Before that, it is good practice to make a copy of the original dataset and then work on the copy.
# Create a copy for further processing
dataset = df.copy()

# Get some basic data about the dataset
print(len(dataset))

Data Preprocessing

Now let's preprocess our dataset. First, let's check for null values, since it is important to handle all null values before feeding the data into our model (if the data contains null values, the model won't be able to learn properly).
# Check for null values
dataset.isna().sum()

Since we encountered some null values in the 'Horsepower' column, we can handle them easily using pandas. We could drop all rows with null values using the dropna() method, or fill the null values with some value like the mean of the entire column. Let's use the fillna() method to fill the null values.
# There are some NA values. We can fill or remove those rows.
dataset['Horsepower'].fillna(dataset['Horsepower'].mean(), inplace=True)
dataset.isna().sum()

Now we can see that there are no null values in any column. According to the dataset description, the 'Origin' column is not numeric but categorical, i.e., each number represents a country. So let's encode this column using the pandas get_dummies() method.
dataset['Origin'].value_counts()
dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
dataset = pd.get_dummies(dataset, columns=['Origin'], prefix='', prefix_sep='')
dataset.head()

From the above output, we can observe that the single 'Origin' column is replaced by 3 columns named after the countries, with 1 or 0 indicating the origin country.
Now, let's split the dataset into a training and a testing/validation set. This is useful for testing the effectiveness of the model, i.e., how well the model generalises to unseen data.
# Split the dataset and create train and test sets
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)

# Separate labels and features
train_features = train_dataset.drop(["MPG"], axis=1)
test_features = test_dataset.drop(["MPG"], axis=1)
train_labels = train_dataset["MPG"]
test_labels = test_dataset["MPG"]

We can check some basic statistics about the data using the pandas describe() function.
# Let's check some basic statistics about the dataset
train_dataset.describe().transpose()

We can now proceed with the next data preprocessing step: normalization.
Data normalization is one of the basic preprocessing steps that converts the data into a format the model can easily process. Here, we scale the data so that each feature has a mean of 0 and a standard deviation of 1. We will use the scikit-learn library to do this.
# We can apply normalization using sklearn.
from sklearn.preprocessing import StandardScaler

feature_scaler = StandardScaler()
label_scaler = StandardScaler()

# Fit on training data
feature_scaler.fit(train_features.values)
label_scaler.fit(train_labels.values.reshape(-1, 1))

# Transform both training and testing data
train_features = feature_scaler.transform(train_features.values)
test_features = feature_scaler.transform(test_features.values)
train_labels = label_scaler.transform(train_labels.values.reshape(-1, 1))
test_labels = label_scaler.transform(test_labels.values.reshape(-1, 1))

We fit and transform separately because fit learns the representation (statistics) of the input data, while transform applies that learnt representation. This way, we avoid looking at the statistics of the test data.
Now let’s get into the most exciting part of the process: Building our neural network.
Creating the Model

Let's create our model using the Keras Sequential API. We can stack the required layers into the sequential model to define the architecture. Let's create a basic fully connected dense neural network for our data.
# Now let's create a deep neural network to train a regression model on our data.
model = Sequential([
    layers.Dense(32, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
])

We have defined the sequential model with 2 dense layers of 32 and 64 units respectively, both using the Rectified Linear Unit (ReLU) activation function. Finally, we add a dense layer with 1 neuron representing the output dimension, i.e., a single number. Now let's compile the model by specifying the loss function and optimizer.
model.compile(optimizer="rmsprop",
              loss="mean_squared_error")
For our model, we are using the RMSProp optimizer and the mean squared error loss function. These are important parameters for our model because the optimizer defines how our model will be improved and loss defines what will be improved.
It is recommended to experiment with the above model by using different layers or by changing the optimizer and loss function, and to observe how the model's performance improves or worsens; one such variation is sketched below.
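For instance, a variation might swap in the Adam optimizer and the mean absolute error loss. This is an illustrative sketch (the model_alt name and the specific choices are ours, not from the original article):

```python
# Same architecture, different optimizer and loss (illustrative variation).
model_alt = Sequential([
    layers.Dense(32, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
])
model_alt.compile(optimizer='adam', loss='mean_absolute_error')
```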
Now we are ready to train the model!
Model Training and Evaluation

Now let's train our model on the training features and labels for 100 epochs. We also pass the validation data to periodically check how the model is performing.
# Now let's train the model
history = model.fit(epochs=100,
                    x=train_features, y=train_labels,
                    validation_data=(test_features, test_labels),
                    verbose=0)

Since we specified verbose=0, we won't get a training log for every epoch. We can use the model history to plot how the model performed during training. Let's define a function for this.
# Function to plot loss
def plot_loss(history):
    plt.plot(history.history['loss'], label='loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.ylim([0, 10])
    plt.xlabel('Epoch')
    plt.ylabel('Error (Loss)')
    plt.legend()
    plt.grid(True)

plot_loss(history)

[Figure: Loss plot]
We can see that the model reaches its lowest loss by the end of training: the fully connected layers allow it to pick up the patterns in the dataset.
Now finally let’s evaluate the model on our testing dataset.
# Model evaluation on the testing dataset
model.evaluate(test_features, test_labels)

The model performed well on the testing dataset! We can save this model and use it later to predict other data.
# Save the model
model.save("trained_model.h5")

Now, we can load the model and perform predictions.
# Load the model and perform predictions
from tensorflow.keras import models

saved_model = models.load_model('trained_model.h5')
results = saved_model.predict(test_features)

# Decode using the scikit-learn scaler to get results in the original units
decoded_result = label_scaler.inverse_transform(results.reshape(-1, 1))
print(decoded_result)

Great! Now we can see the predictions, i.e., the decoded outputs for the input features given to our model.
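One possible extension (our sketch, reusing the variables defined above): inverse-transform the scaled test labels as well, and report the mean absolute error in actual MPG units:

```python
import numpy as np

# Undo the label scaling for both ground truth and predictions,
# then compute the mean absolute error in MPG.
true_mpg = label_scaler.inverse_transform(test_labels.reshape(-1, 1))
pred_mpg = label_scaler.inverse_transform(results.reshape(-1, 1))
mae_mpg = np.mean(np.abs(true_mpg - pred_mpg))
print(f"MAE: {mae_mpg:.2f} MPG")
```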
Conclusion

In this blog on regression with neural networks, we trained a baseline regression model using a deep neural network on a small, simple dataset and predicted the required output. As a learning challenge, it is recommended to train more models on bigger datasets, changing the architecture and trying different hyperparameters such as optimizers and loss functions. I hope you liked this article on regression with neural networks.
About the Author

I'm Narasimha Karthik, a deep learning practitioner and final-year undergraduate student at PES University, currently working on Computer Vision and NLP, with experience in the TensorFlow and PyTorch frameworks. You can contact me through LinkedIn and Twitter.
Read more articles on our blog.
The media shown in this article is not owned by Analytics Vidhya and are used at the Author’s discretion.
Extreme Networks: Networking Portfolio Review
Extreme Networks works to provide products that give companies cloud-driven networking solutions.
As companies produce and rely on more data, networking is the critical backbone for how computers are linked to share data. Extreme Networks is a leader in the sector that is using a range of technologies to design and support modern networks: machine learning (ML), artificial intelligence (AI), analytics, and automation.
See below to learn all about Extreme Networks and where they stand in the networking space:
The networking market has grown across many sub-sectors, including both software-defined networking (SDN) and cloud-driven networking.
The global SDN market is estimated to be $18.5 billion in 2023 and $36.2 billion by 2026, according to StrategyR.
The global cloud-driven networking market was valued at an estimated $3.32 billion in 2023 and is projected to reach $14.61 billion by 2027, according to Fortune Business Insights.
In fiscal year 2023, Extreme Networks’ net revenue was over $1 billion, up 6% year over year.
“We have seen enormous opportunity and growth within the cloud managed wired and wireless networking industry,” says Will Hopson, a senior territory manager, Extreme Networks.
In Extreme Networks’ 2023 annual report, the company highlights that through cloud-driven networking and automation, network administrators can scale how they provide “productivity, availability, accessibility, manageability, security, and speed, regardless of how distributed the network is.”
For more: Top Enterprise 5G Networks
One Network
Wired platforms
Universal Switches
ExtremeXOS Switches
VSP Switches
Wireless platforms
Universal APs (WiFi 6/6E APs)
Indoor/outdoor APs
Wireless controllers
Security
ExtremeNAC
Extreme SD-WAN
AirDefense
Fabric services
Fabric Connect
One Cloud
AIOps
ExtremeCloud IQ
ExtremeCloud IQ-Site Engine
ExtremeCloud IQ-Controller
Business insights
Extreme Analytics
Extreme Location
Extreme Guest
One Extreme
Professional services
Maintenance services
Customer success
Reduced security risk: through hyper-segmentation of the network
Increased savings: Extreme Networks reports traditional network infrastructure, operational, and admin savings up to 10%
Improved support and problem resolution: Extreme reports that 94% of customer help calls are resolved by the network specialist who answers the call, and that support is co-located with engineering
Enhanced network intelligence: through the ability to have a 360-degree view of the network, users, devices, and applications
Use latest software: Universal hardware platforms allow customers to evolve deployment models by changing the software while using existing hardware
Jackpot Junction Casino Hotel in Minnesota was in need of network management for their wireless networks and devices that are secure and compliant for gaming regulations. With the risk of outages and downtime, Jackpot Junction’s IT team needed to keep the casino up and running.
Nick Potter, director of IT, was aware the casino needed a new approach to their network management, including the need to automate processes and for visibility of their pain points to help the IT team handle the network demand and support guest services.
Potter decided that Extreme Networks could help with their problem. Adding the network management solution increases network uptime and enables the casino to keep up with the initiatives, including self-service guest kiosks. By future-proofing the latest gaming devices, the staff was also able to get easier access to tablets for the hotel and restaurant.
Jackpot Junction’s networking solution from Extreme included ExtremeSwitching, FabricConnect, and ExtremeCloud IQ.
“It’s just kind of magic how you move a device and the Fabric recognizes it and routes it,” Potter says.
Jackpot Junction’s networking implementation by Extreme Networks also allowed the casino to support the Lower Sioux Native American community nearby with networking, including high-speed, secure connectivity for their government buildings, community center and health care facility.
Dynamic role-based policies:
support scalable policy mechanism on wired and wireless devices for users, devices, and applications through the network
Application hosting:
run onboard applications alongside the switch operating system (OS), without impacting performance; provide network insight through on-board analytics applications; enable new network applications without the need for a separate hardware device
Artificial intelligence (AI)/machine learning (ML)-driven insights:
ongoing integration with ExtremeCloud IQ offers the ability to fine-tune the network before issues impact service
Innovation:
new networking ideas coming out of the company, according to various customers
For more: 5 Top Private 5G Trends
Users give mostly positive reviews to several Extreme Networks products, which are partly driven by AI:
PeerSpot: 4.1 out of 5
PeerSpot: 4.3 out of 5
CRN 2023 “Networking Products of the Year”:
for the way they “enhance insight, visibility, and control” in networking
Gartner 2023 “Magic Quadrant for Enterprise Wired and Wireless LAN Infrastructure”:
for their operations in distributed environments and delivering in-demand resources and capabilities
CRN 2023 “Top 10 Coolest New Networking Products”:
due to a subsection of their ExtremeCloud IQ platform called ExtremeCoPilot
CRN 2023 “Data Center 50”:
one of the key players in the data center market.
2023 “IT Champion, Networking” by Computerwoche:
for network infrastructure
Extreme Networks helps companies establish their networks at the software level as well as the hardware level with switches and routers. With reported benefits such as reduced security risk and network-based savings, Extreme Networks is a leading player in the growing networking market.
“Our world is more reliant on technology than ever before, and as a technology company it is our duty to ensure we are making a positive impact,” says Katy Motiey, chief admin and sustainability officer, Extreme Networks.
A company looking for networking solutions to build out or upgrade a network can consider Extreme Networks as a provider with focused options.
For more products: NordLayer: Network Security Review
Tech Giants Move Toward Social Networks
SANTA CLARA — As the Facebook generation becomes a bigger part of the enterprise, companies face the challenge of implementing increasingly familiar social network technologies in concert with legacy systems. That was one of the themes expressed by a panel of leading vendors here at the Collaborate 2.0 conference sponsored by SD Forum.
“In IT, a user is a login; on Facebook, a user is a profile with a picture and other details. That’s pretty empowering. End users are driving change,” said Chuck Ganapathi, senior vice president of products at Salesforce.com (NYSE: CRM).
The next generation of IT applications may well leverage something like Facebook’s look and feel for a logical reason. “Facebook has over 300 million users now and is on the way to training half a billion people on what is really a pretty sophisticated application — there’s a lot going on there,” Ganapathi said.
And as these collaborative, social network technologies inevitably spread, Ganapathi said a key issue to be resolved is IT control versus user power.
“Your employees are going to download Yammer because it’s a better way to communicate,” he said. “Getting to a happy medium is going to be very important in the enterprise.”
But it’s also not just about allowing blogs or adding a corporate wiki, he said.
“There are lots of tools today to make the conversations in your company more social, but what about the data that’s sitting there in Excel, in ERP, in e-mail? How you make that data social is going to be key.”
Microsoft has lit a FUSE

Matt Thompson, general manager of Microsoft’s (NASDAQ: MSFT) developer and platform evangelism in Silicon Valley, said the software giant is ready to make moves in the social network/collaboration space beyond its already successful SharePoint software. He said Microsoft Research has about 25 different social collaboration projects that have been put under one group called FUSE Labs.
“You’re going to see some innovative stuff under social collaboration,” he said. “We have a vision for where this is going in the future. Video and telepresence is a key piece. And you’ll see a lot more interoperability as well. This can’t be a single stack.”
Thompson noted that Facebook execs have said they have no plans to develop a private version or social graph for the enterprise, though they haven’t ruled out working with partners — one of which is Microsoft, which owns a stake in the social networking phenom.
“Internal IT is a very fertile ground to disrupt,” Thompson said. “The key is there won’t be multiple social graphs. I don’t think Facebook realizes the big role they have.”
That said, Thompson gave Facebook big props for opening up its platform to let users take their Facebook identity with them when visiting other sites.
Thompson also took note of Twitter, which he said he loves. Like Facebook Connect, he said, a huge percentage of users use the service without Twitter.com as a starting point.
“They’re delivering collaboration at 140 characters wherever the user may be,” he said.
Cisco and the future of work

Like Microsoft, Cisco (NASDAQ: CSCO) is investing in multiple social network and collaborative areas, including a portfolio of nine businesses in the incubation stage.
“Our thesis is that we’re on the cusp of a big transformation like the Internet in the ’90s around the future of work, putting people and productivity back into the equation,” said Didier Moretti, vice president of business incubation in Cisco’s Emerging Technologies group.
IBM (NYSE: IBM) is another company very much on the social network bandwagon. Roosevelt Bynum, who manages the company’s developerWorks Web applications, said the My developerWorks community site is “like Facebook for geeks.”
(Update: An earlier version of this article incorrectly referred to Cisco as the development partner that helped Starbucks create the MyStarbucksIdea site. The partner was actually Salesforce.com, which powers the site.)
Article courtesy of chúng tôi
Understanding Neural Network: A Beginner’s Guide
The term “neural network” derives from the work of neuroscientist Warren S. McCulloch and logician Walter Pitts, who developed the first conceptual model of an artificial neural network. In their work, they describe the concept of a neuron: a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output. In the computing world, neural networks are organized in layers made up of interconnected nodes, each containing an activation function. Patterns are presented to the network through the input layer, which communicates them to one or more hidden layers; the hidden layers perform the processing and pass the outcome to the output layer.
Neural networks are typically used to derive meaning from complex, non-linear data and to detect and extract patterns that cannot be noticed by the human brain. Some standard applications of neural networks these days:

# Pattern/image or object recognition
# Time series forecasting/classification
# Signal processing
# Control in self-driving cars
# Anomaly detection

These applications map to different types of neural networks, such as convolutional neural networks, recurrent neural networks, and feed-forward neural networks. The first is mostly used in image recognition, as it uses a mathematical operation known as convolution to analyze images in non-literal ways.

Let's understand neural networks in R with a dataset. The dataset consists of 724 observations and 7 variables: "Companies.Changed", "Experience.Score", "Test.Score", "Interview.Score", "Qualification.Index", "age", "Status". The following code trains a network classifying 'Status' as a function of several independent variables. Status refers to recruitment, with two values: Selected and Rejected.

To begin, we first need to install the "neuralnet" package.

> library(neuralnet)
> HRAnalytics <- read.csv("filename.csv")
> temp <- HRAnalytics

Now, removing NA values from the data:

> temp <- na.omit(temp)
> dim(temp)  # 724 rows and 7 columns
[1] 724 7

> y <- temp$Status

# Assigning levels in the Status column
> levels(y) <- c(-1, +1)
> class(y)
[1] "factor"

# Now converting the factors into numeric
> y <- as.numeric(as.character(y))
> y <- as.data.frame(y)
> names(y) <- c("Status")

Removing the existing Status column and adding the new one, y:

> temp$Status <- NULL
> temp <- cbind(temp, y)
> temp <- scale(temp)
> set.seed(100)
> n = nrow(temp)

The dataset will be split into a subset used for training the neural network and another set used for testing. As the ordering of the dataset is completely random, we do not have to extract random rows and can just take the first x rows.

> train <- sample(1:n, 500, FALSE)
> f <- Status ~ Companies.Changed + Experience.Score + Test.Score + Interview.Score + Qualification.Index + age

Now we'll build a neural network with 3 hidden nodes and train it with backpropagation. Backpropagation refers to the backward propagation of error.

> fit <- neuralnet(f, data = temp[train, ], hidden = 3, algorithm = "rprop+")

Plotting the neural network:

> plot(fit, intercept = FALSE, show.weights = TRUE)
The above plot shows the six input nodes, the three hidden nodes, and the output node.

> z <- temp
> z <- z[, -7]

The compute function computes the outputs based on the independent variables as inputs from the dataset. Now, let's predict on the test data (-train):

> pred <- compute(fit, z[-train, ])
> sign(pred$net.result)

Now let's create a simple confusion matrix:

> table(sign(pred$net.result), sign(temp[-train, 7]))

     -1   1
-1  108  20
1    36  60

> (108 + 60) / (108 + 20 + 36 + 60)
[1] 0.75

Here, the prediction accuracy is 75%.

I hope the above example helped you understand how neural networks tune themselves to find the right answer on their own, increasing the accuracy of the predictions. Please note that an acceptable level of accuracy is usually considered to be over 80%. Like any other technique, neural networks also have certain limitations. One major limitation is that the data scientist or analyst has no role other than to feed the input and watch the network train and give the output. One article mentions that "with backpropagation, you almost don't know what you're doing." If we set aside the negatives, neural networks have huge applications and are a promising and practical form of machine learning. In recent times, the best-performing artificial-intelligence systems in areas such as autonomous driving, speech recognition, computer vision, and automatic translation have all been aided by neural networks. Only time will tell how this field will emerge and offer intelligent solutions to problems we have not yet thought of.