Gliding Algorithm Lets Drones Surf The Winds For Hours


Launch an unmanned SBXC sailplane into the air with a catapult and it glides lazily down again in about three minutes. But when the sailplane is equipped with software to find plumes of warm air rising from the ground, known as thermals, it can ride them upward and extend its flight to a record-breaking five hours.

The software, known as ALOFT, for Autonomous Locator of Thermals, was developed by Dr. Dan Edwards and colleagues at the Naval Research Laboratory. The team flew more than 20 test flights with the software at the Phillips Army Airfield in Maryland last October. In total, those flights lasted more than 30 hours, showing that algorithms that continually update data on thermals can help unmanned sailplanes fly much longer than previously possible.

“The biggest challenge for autonomous soaring is teasing out the skills that a soaring pilot uses into an algorithm that an autopilot can follow,” says Edwards. “We have spent many hours watching the auto-soaring algorithm work, but behave not quite the way that makes the most sense, only then to return to the computer and tweak the programming.”

ALOFT uses the sensors already built into the aircraft (pressure sensors for airspeed and altitude, plus GPS and inertial navigation) to ‘feel’ when it is getting lift from a thermal. ALOFT then steers the drone in a tight loop to keep the aircraft inside the rising column of air.
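NRL hasn't published ALOFT's internals, but the loop the article describes (sense unexpected climb, then tighten the turn to stay inside it) is easy to sketch. Here is a minimal, illustrative version in Python; the names, thresholds, and rates are assumptions, not NRL's code.

# Illustrative sketch of sense-lift-then-circle; not NRL's actual ALOFT code.
CLIMB_THRESHOLD = 0.5   # m/s of net climb taken to indicate a thermal (assumed)
MAX_BANK_DEG = 45.0     # steepest bank allowed while circling (assumed)

def net_lift(alt_now, alt_prev, dt, still_air_climb=-0.7):
    """Observed climb rate minus the glider's still-air climb rate (which is
    negative for a glider); a positive result suggests rising air."""
    return (alt_now - alt_prev) / dt - still_air_climb

def update_bank(lift, bank_deg):
    """Tighten the circle while in lift; relax toward straight glide otherwise."""
    if lift > CLIMB_THRESHOLD:
        return min(bank_deg + 2.0, MAX_BANK_DEG)
    return max(bank_deg - 2.0, 0.0)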

UAV images: Rob Walters; SunPower arrays and the SBXC airframe: courtesy U.S. Naval Research Laboratory

Five hours may seem like a long time to stay up using nothing but warm air, but far more is possible. A recent study found that frigate birds fly continuously for months at a stretch, riding thermals without coming down to settle on the water. If drones are to do the same, they need to get better at locating thermals.

The original system relied on running into thermals by chance. But thermals tend to form in the same places repeatedly, often over tarmac or bare rock, which soak up the sun better than the surrounding terrain. So the researchers gave ALOFT a memory of where it had found thermals before.
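The article doesn't describe the data structures involved, but a hot-spot memory of this kind might look something like the following sketch (an illustrative assumption, not the NRL implementation):

import math

class ThermalMap:
    """Remember where lift was found and suggest revisiting strong hot-spots.
    An illustrative sketch; all names and parameters are assumptions."""

    def __init__(self, merge_radius_m=300.0):
        self.merge_radius_m = merge_radius_m
        self.hotspots = []  # each entry: {"lat", "lon", "strength", "hits"}

    def record(self, lat, lon, climb_rate):
        """Log a thermal encounter, merging it with any nearby known hot-spot."""
        for spot in self.hotspots:
            if self._dist_m(lat, lon, spot["lat"], spot["lon"]) < self.merge_radius_m:
                spot["hits"] += 1
                spot["strength"] = max(spot["strength"], climb_rate)
                return
        self.hotspots.append({"lat": lat, "lon": lon,
                              "strength": climb_rate, "hits": 1})

    def best_hotspot(self):
        """Suggest the spot that has most often produced the strongest lift."""
        return max(self.hotspots,
                   key=lambda s: s["hits"] * s["strength"], default=None)

    @staticmethod
    def _dist_m(lat1, lon1, lat2, lon2):
        # Flat-earth approximation, adequate over a single airfield.
        dx = (lon2 - lon1) * 111320.0 * math.cos(math.radians(lat1))
        dy = (lat2 - lat1) * 111320.0
        return math.hypot(dx, dy)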

“At the beginning of the week, the aircraft wandered around aimlessly looking for thermals,” says Edwards. “By the end of the week, the aircraft would move between frequent hot-spots and almost immediately find good lift. That was super exciting.”

Researchers monitored how the software improved flight paths at Phillips Army Airfield in Maryland. Courtesy U.S. Naval Research Laboratory

Another approach, which the team worked on with help from Pennsylvania State University’s Air Vehicle Intelligence and Autonomy Laboratory, was to have two soaring drones working cooperatively: as soon as one found a thermal, it signaled the other. This matches the behavior of soaring birds, which often fly over to share a thermal. Edwards found that real birds would join in when ALOFT was soaring.

“It’s a patriotic feeling watching a bald eagle come over to soar in the same thermal as your robots,” says Edwards.

The cooperative approach will be even more effective with more drones.

A third approach is to equip the drone with sensors such as LIDAR, allowing it to detect a thermal in the distance. This would require additional hardware, whereas the basic ALOFT can be easily added to almost any drone with an autopilot.

Basic sensors may help too: researchers found that frigate birds look for cumulus clouds, a sure sign of an updraft carrying moist air upward, and glider pilots may look for grass or insects carried up on a thermal.

The basic ALOFT software can be easily added to almost any drone with an autopilot. Courtesy U.S. Naval Research Laboratory

Soaring algorithms could be retrofitted to existing small drones carrying out tasks like mapping or surveying. Pausing to gain altitude on a thermal might mean the drone takes longer to reach a given destination, but greatly extends the mission time.

“ALOFT could be programmed into nearly any autopilot,” says Edwards. “We’ve adapted ALOFT for three different aircraft and have been successful in soaring with all three.”

Edwards says that the next project is to integrate the soaring software with solar power and a hydrogen fuel cell to create a hybrid craft with extreme endurance.

“The Naval Research Laboratory’s Ion Tiger aircraft flew for forty-eight hours on hydrogen before,” says Edwards. “Adding the auto-soaring and solar systems is projected to double the endurance with only minor modifications to the aircraft.”

ALOFT is being re-engineered to fly on new hardware in an NRL program called Solar Soaring, and the combination of solar and soaring should fly in October.

Soaring could provide an easy boost to long-endurance solar unmanned aircraft like those being developed by Google and Facebook as alternatives to satellites, providing communications to remote or under-served areas. And solar-plus-soaring drones might travel the world.

“One day in the future, I can imagine a UAV flying across the ocean on just the power from the sun and the wind,” says Edwards.


Predictions For The Future: Google’s Algorithm Updates

Editor Note: Many points in this post are the author’s speculation/prediction of the future and not concrete fact.

Everyone is talking about “Mobilegeddon,” the recent change Google made to its search algorithm, penalizing sites that do not provide a good experience for mobile users. However, there was another recent update, which passed largely unnoticed and accordingly received an even more fitting name – Phantom.

Why does it matter? You need to anticipate how Google search will evolve in the near future so you can plan and adjust accordingly.

In this blog post I will try to make a prediction about how the company will evolve the search algorithm in order to make sure it continues to rule the internet.

The Google Algorithm Changes Forecast

The main point to remember is that search is the pinnacle of Google’s business strategy, so the company has to do everything in its power to protect it and keep it healthy. Search is all about the user – Google does not want you to be unhappy with your search experience, and they most certainly don’t want you to switch to an alternative search engine.

In order to do this they will continue to focus on relevance and ease of use.

Quality of Content Will Continue to Rule

Starting with Panda, Google has really made website owners and SEO experts focus on the quality of website content. The search engine will continue to get better at judging content quality as its artificial intelligence develops. Some experts even suggest that the time when the algorithm becomes better at judging quality and relevance than humans is not far away.

However, the human factor will not be overlooked, either. With search beginning to feature tweets, it is clear that social signals will come to bear more influence on rankings. But it is unclear whether this will remain a long-term feature.

Usability Will Become More Important

Another important trend in usability improvement is the rise of conversational search. The introduction of the Hummingbird update in 2013 overhauled the way keywords work in order to make SEO more aligned with the way people are searching. Again, with the improvement in Google’s ability to predict relevance, optimizing for searchers’ intention will become more important than targeting specific keywords.

Changes Will be Much More Gradual

If you’ve been reading carefully, you have probably already noticed there’s a spirit of slower, more gradual change. Perhaps the most obvious proof of this is the name of one of the updates that affected users in the last month – Phantom. But even Mobilegeddon, regardless of its name, didn’t affect as many people as was initially expected.

I think this trend will continue as Google will be much more strategic and careful about the changes they are making.

Google Will Continue to Experiment with Temporary Changes

When mentioning Twitter and social signals, it is important to point out that this might turn out to be short-lived. The search team has already proven they are not afraid to move quickly and scrap features that do not fulfill their potential. This was the case of Google Authorship and might very well turn out to be the same for the Google-Twitter partnership and social signals in general.

Here’s why:

With such partnerships, Google not only harms its own products (G+ in this case), but also becomes reliant on external parties for data.

Social signals are unreliable and notoriously easy to falsify.

The algorithm might develop to the point where Google no longer needs to factor in signals from third-party social networks.

However, the company may elect to follow the opposite strategy and secure its interest by acquiring Twitter. Reports of an imminent merger have already surfaced in the media. Such a step would be a very strong signal that the search algorithm will come to rely heavily on social signals.

How to be Prepared for Future Google Algorithm Updates

One of the safest predictions to make is that the search giant is not going to alter its policy of being extremely secretive about the changes it makes in the way the engine works. However, that doesn’t mean there’s nothing you can do to be prepared for those changes and make the best of them.

Here are some areas where you should direct your attention and resources if you want to be ready for the next time Google decides to alter their policy:

Focus on Producing High-Quality Content

Producing just any content isn’t enough, so if you’re finding it challenging to put out great material on a regular basis, talk to someone who can take it over for you. The money you spend on such a partnership will come back to you in new customers.

And while we’re on the subject, you might also want to consider someone who understands SEO. Even if you are pumping out engaging content on a regular basis, there are plenty of people out there who are doing the same. Optimizing it for search engines might be the differentiator you need to succeed.

Invest in Improving User Experience and Engagement

Going after websites that have no mobile version is just the first step. Do not forget that Google has a wealth of information about how visitors interact with your website (through Analytics) that helps it determine whether your page provides a good experience or not.

Keep an eye on vital statistics, such as page load time, bounce rate, time on page, and other key data points. And since Google will continue to provide differentiated experience to its users across different platforms (desktop, mobile, etc.), remember to keep track of your performance across each of these.

That doesn’t mean you have to give equal attention to each platform; focus on those most relevant to your business goals.

Plan for the Future

It really isn’t that hard to be prepared for Google updates to the search algorithm. All you have to do is focus on providing relevant information and good experience to your visitors.

Of course, that is easier said than done, but if you remain aware of changes happening in the industry and how people search and browse, it is not impossible to achieve it. For example, the trend toward providing better mobile web experience was present for a few years, so Google’s stance on it with Mobilegeddon was hardly surprising.

And while few people can predict the exact changes Google will introduce, every one of us can and should adopt a mentality of constantly planning for the future.

Featured Image: Denys Prykhodov via Shutterstock

What Are The Best Algorithm Tips For Digital Marketing?

At first, algorithms were used in manufacturing processes or as tools for market analysis. But in the modern world, where commerce and technology are pervasive, we can run predictive analyses using the data we already have. In digital marketing, algorithms can be employed to manage customers and deliver the right material to them effectively.

To be effective, an algorithm must be built on data about a customer’s habits, actions, interests, and experience. With this, you can choose who receives what depending on factors like hobbies, age, and location. The term for this is audience segmentation.

Algorithms are simply predetermined sets of instructions for carrying out specific tasks. Even everyday activities, like cleaning the bathroom or baking a dessert, follow an algorithm of sorts.

Importance of Algorithms in Digital Marketing

First, though, a word about social media algorithms. These are methods of sorting the posts in a user’s feed by relevance: content is prioritized according to its potential and relevancy. Different social media networks implement their algorithms differently, but they do share one thing: these systems populate your feed with posts from selected accounts. Algorithms have long posed difficulties for marketers, yet given the deluge of social media content, nobody wants irrelevant material surfacing in their feed. Understanding how social media algorithms operate is an art worth mastering.

Algorithms and Social Platforms

When it concerns social networking sites, most people are familiar with algorithms. An algorithm chooses the posts that will show up in your feed. Social media algorithms discover your interests and suggest users and accounts for you to connect with.

The parameters of each social media platform’s algorithm vary slightly. On Instagram, engagement is key: the more interaction your account receives, the wider your reach.

Facebook prioritizes content from your close friends. Based on your preferences, this circle enlarges to display other pertinent information.

Twitter displays the most recent news in your feed based on your location and interests. TikTok is driven by sounds, hashtags, and hobbies. LinkedIn weighs relationships, relevance, and engagement potential.

Algorithms for Digital Marketers

Many online platforms, including Google, place an emphasis on user-friendliness, which motivates users and web designers to produce relevant, high-quality material. By ranking material and people, online algorithms enable marketers to engage with customers in a more personal way. Marketers are beginning to take advantage of these algorithms to build their brands and enhance their offerings.

Based on your online activity, each social media network has its own algorithm that it employs to display information. As a result, there is no one online marketing strategy that works for everyone.

The A* search algorithm, which finds the shortest route to a given destination, is another helpful tool for efficient time management: knowing what needs to be done first helps you complete the task more quickly.
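As a concrete illustration, here is a minimal A* implementation in Python. It is a generic sketch: the neighbors and heuristic functions are assumptions supplied by the caller, not anything specific to a marketing platform.

import heapq
import itertools

def a_star(start, goal, neighbors, heuristic):
    """Minimal A* search. neighbors(node) yields (next_node, step_cost) pairs;
    heuristic(node) must never overestimate the remaining cost to goal."""
    counter = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(counter), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path  # lowest-cost path found
        for nxt, step in neighbors(node):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt),
                                          next(counter), new_cost, nxt, [*path, nxt]))
    return None  # goal unreachable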

Examples of Best Algorithms

Instagram

Of all social media platforms, Instagram has the highest rate of engagement, and its algorithm makes sure that the content that has received the most likes is shown. The algorithm also prioritizes posts from accounts that people routinely and consistently interact with.

The more recent your post is, the more likely it is to be recommended to other people. If you want your material on Instagram to get seen, engage with followers within the first hour of uploading.

Pinterest

Original, recent, and fresh pins are required, as well as high-quality photographs. A maximum of fifty pieces of content can be scheduled by digital marketers each day.

Linkedin

LinkedIn is a popular and effective marketing platform among B2B marketers. LinkedIn gives priority to material from people who post frequently and with whom you have already interacted. The platform also favors native content, such as text posts, videos, and photographs with text, over content on company pages.

Conclusion

More problems bring more fixes, and in marketing, excellent solutions are becoming more prevalent. The technology behind algorithms makes our work considerably simpler and more efficient; we should remember to value such a useful tool.

Instead of becoming overwhelmed by ever-changing social media algorithms, marketers must learn how to benefit from each improvement. A stronger digital approach increases brand recognition and customer loyalty, resulting in increased sales. Used effectively, digital marketing directly affects sales.

Gradient Boosting Algorithm: A Complete Guide For Beginners

This article was published as a part of the Data Science Blogathon

Introduction

In this article, I am going to discuss the math intuition behind the Gradient boosting algorithm. It is more popularly known as Gradient boosting Machine or GBM. It is a boosting method and I have talked more about boosting in this article.

Gradient boosting is a method that stands out for its prediction speed and accuracy, particularly with large and complex datasets. From Kaggle competitions to machine learning solutions for business, this algorithm has produced some of the best results. We already know that errors play a major role in any machine learning algorithm. There are mainly two types of error: bias error and variance error. The gradient boosting algorithm helps us minimize the bias error of the model.

Before getting into the details of this algorithm, we must have some knowledge of the AdaBoost algorithm, which is again a boosting method. That algorithm starts by building a decision stump and assigning equal weights to all the data points. It then increases the weights for all the points which are misclassified and lowers the weights for those that are easy to classify or are correctly classified. A new decision stump is made for these weighted data points. The idea behind this is to improve the predictions made by the first stump. I have talked more about this algorithm here; read that article before starting this one to get a better understanding.

The main difference between these two algorithms is that Gradient boosting has a fixed base estimator i.e., Decision Trees whereas in AdaBoost we can change the base estimator according to our needs.

Table of Contents

What is Boosting technique?

Gradient Boosting Algorithm

Gradient Boosting Regressor

Example of gradient boosting

Gradient Boosting Classifier

Implementation using Scikit-learn

Parameter Tuning in Gradient Boosting (GBM) in Python

End Notes


About the Author

What is boosting?

While studying machine learning you must have come across the term Boosting; it is one of the most misinterpreted terms in the field of Data Science. The principle behind boosting algorithms is that first we build a model on the training dataset, then a second model is built to rectify the errors present in the first model. Let me explain what exactly this means and how it works.

Suppose you have n data points and 2 output classes (0 and 1), and you want to create a model to detect the class of the test data. We randomly select observations from the training dataset and feed them to model 1 (M1); we also assume that initially all the observations have an equal weight, which means an equal probability of being selected.

Remember in ensembling techniques the weak learners combine to make a strong model so here M1, M2, M3….Mn all are weak learners.

Since M1 is a weak learner, it will surely misclassify some of the observations. Before feeding the observations to M2, we update the weights of the observations which are wrongly classified. You can think of it as a bag that initially contains 10 balls of different colors, but after some time a kid takes out his favorite ball and puts 4 red balls inside the bag instead. Now, of course, the probability of selecting a red ball is higher. The same phenomenon happens in boosting: when an observation is wrongly classified its weight gets updated, and for those which are correctly classified the weights get decreased. The probability of selecting a wrongly classified observation increases, so in the next model those observations which were misclassified in model 1 are more likely to be selected.

Similarly with M2: the weights of the wrongly classified observations are again updated, and the data is fed to M3. This procedure continues until the errors are minimized and the dataset is predicted correctly. Now when a new data point comes in (test data), it passes through all the models (weak learners), and the class which gets the highest vote is the output for our test data.

What is a Gradient boosting Algorithm?

The main idea behind this algorithm is to build models sequentially and these subsequent models try to reduce the errors of the previous model. But how do we do that? How do we reduce the error? This is done by building a new model on the errors or residuals of the previous model.

When the target column is continuous, we use the Gradient Boosting Regressor, whereas when it is a classification problem, we use the Gradient Boosting Classifier. The only difference between the two is the loss function. The objective here is to minimize this loss function by adding weak learners using gradient descent. Since the method is built on a loss function, for regression problems we’ll have a loss function like mean squared error (MSE), and for classification we will have a different one, for example log-likelihood.

Understand Gradient Boosting Algorithm with example

Let’s understand the intuition behind Gradient boosting with the help of an example. Here our target column is continuous hence we will use Gradient Boosting Regressor.

Following is a sample from a random dataset where we have to predict the car price based on various features. The target column is price and other features are independent features.

[Image: sample dataset with car features as inputs and price as the target column. Image Source: Author]

Step 1: The first step in gradient boosting is to build a base model to predict the observations in the training dataset. For simplicity we take an average of the target column and assume that to be the predicted value, as shown below:

[Image: the same dataset with the average price, 14500, filled in as the initial prediction for every row. Image Source: Author]

Why did I say we take the average of the target column? Well, there is math involved behind this. Mathematically, the first step can be written as:
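$$F_0(x) = \underset{\gamma}{\arg\min} \sum_{i=1}^{n} L(y_i, \gamma)$$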

Looking at this may give you a headache, but don’t worry we will try to understand what is written here.

Here L is our loss function

Gamma is our predicted value

argmin means we have to find a predicted value/gamma for which the loss function is minimum.

Since the target column is continuous our loss function will be:
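$$L = \frac{1}{2}\sum_{i=1}^{n}\big(y_i - \gamma\big)^2$$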

Here yi is the observed value

And gamma is the predicted value

Now we need to find the value of gamma for which this loss function is minimum. We all studied how to find minima and maxima in 12th grade: we differentiate the function and set the derivative equal to 0, right? Yes, and we will do the same here.

Let’s see how to do this with the help of our example. Remember that y_i is our observed value and gamma is our predicted value; plugging these into the formula above and differentiating, we get:
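$$\frac{\partial L}{\partial \gamma} = -\sum_{i=1}^{n}\big(y_i - \gamma\big) = 0 \;\Rightarrow\; \gamma = \frac{1}{n}\sum_{i=1}^{n} y_i$$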

We end up with the average of the observed car prices, and this is why I asked you to take the average of the target column and assume it to be your first prediction.

Hence for gamma = 14500 the loss function is minimum, so this value becomes the prediction of our base model.

Step 2: The next step is to calculate the pseudo residuals, which are (observed value – predicted value):

$$r_{im} = -\left[\frac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)}\right]_{F(x) = F_{m-1}(x)}$$

Here F(x_i) is the prediction of the previous model and m is the number of decision trees (DTs) made.

We are just taking the derivative of loss function w.r.t the predicted value and we have already calculated this derivative:

If you look at the formula of the residuals above, the derivative of the loss function is multiplied by a negative sign, so we get:
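$$r_{im} = y_i - F_{m-1}(x_i)$$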

The predicted value here is the prediction made by the previous model. In our example the prediction made by the previous model (the initial base model prediction) is 14500, so to calculate the residuals our formula becomes:
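$$r_{i1} = y_i - 14500$$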

Step 3: In the next step, we will build a model on these pseudo residuals and make predictions. Why do we do this? Because we want to minimize these residuals, and minimizing the residuals will eventually improve our model’s accuracy and prediction power. So, using the residual as the target and the original features (cylinder number, cylinder height, and engine location), we will generate new predictions. Note that the predictions in this case will be the error values, not the predicted car prices, since our target column is now the error.

Let’s say hm(x) is our DT made on these residuals.

Step 4: In this step we find the output value for each leaf of our decision tree. There might be a case where one leaf gets more than one residual, so we need to find the final output of all the leaves. To find the output we can simply take the average of all the numbers in a leaf, no matter whether there is only one number or more than one.

Let’s see why we take the average of all the numbers. Mathematically, this step can be represented as:
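$$\gamma_m = \underset{\gamma}{\arg\min} \sum_{i=1}^{n} L\big(y_i,\; F_{m-1}(x_i) + \gamma\, h_m(x_i)\big)$$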

Here hm(xi) is the DT made on residuals and m is the number of DT. When m=1 we are talking about the 1st DT and when it is “M” we are talking about the last DT.

The output value for a leaf is the value of gamma that minimizes the loss function. The left-hand side gamma is the output value of a particular leaf. On the right-hand side, [F_{m-1}(x_i) + γ h_m(x_i)] is similar to step 1, but here the difference is that we take the previous predictions into account, whereas earlier there was no previous prediction.

[Image: decision tree built on the residuals, with leaves R1,1, R2,1, and R3,1]

We see the 1st residual goes in R1,1, the 2nd and 3rd residuals go in R2,1, and the 4th residual goes in R3,1.

Let’s calculate the output for the first leaf, R1,1:

Now we need to find the value of gamma for which this function is minimum, so we take the derivative of this equation w.r.t gamma and set it equal to 0:
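With only the first residual, r_1 = -2500, in this leaf:

$$\frac{\partial}{\partial \gamma}\,\frac{1}{2}\big(r_1 - \gamma\big)^2 = -(r_1 - \gamma) = 0 \;\Rightarrow\; \gamma = r_1 = -2500$$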

Hence the leaf R1,1 has an output value of -2500. Now let’s solve for R2,1.

Let’s take the derivative to get the value of gamma that minimizes this function:
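$$\frac{\partial}{\partial \gamma}\left[\frac{1}{2}\big(r_2 - \gamma\big)^2 + \frac{1}{2}\big(r_3 - \gamma\big)^2\right] = -(r_2 - \gamma) - (r_3 - \gamma) = 0 \;\Rightarrow\; \gamma = \frac{r_2 + r_3}{2}$$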

We end up with the average of the residuals in the leaf R2,1. Hence if we get any leaf with more than one residual, we simply take the average of the residuals in that leaf, and that will be our final output.

Now after calculating the output of all the leaves, we get:

[Image: the residual tree with the computed output value in each leaf. Image Source: Author]

Step 5: This is finally the last step, where we update the predictions of the previous model. The prediction can be updated as:
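$$F_m(x) = F_{m-1}(x) + \nu\, h_m(x)$$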

where m is the number of decision trees made.

Since we have just started building our model, m = 1. To make a new DT, our new predictions will be:

$$F_1(x) = F_0(x) + \nu\, h_1(x)$$

Here F_{m-1}(x) is the prediction of the base model (the previous prediction): since m = 1, F_{1-1} = F_0 is our base model, so the previous prediction is 14500.

nu is the learning rate, usually selected between 0 and 1. It reduces the effect each tree has on the final prediction, and this improves accuracy in the long run. Let’s take nu = 0.1 in this example.

hm(x) is the most recent DT, made on the residuals.

Let’s calculate the new prediction now:

$$F_1(x_1) = 14500 + 0.1 \times (-2500) = 14250$$

Suppose we want to find a prediction of our first data point which has a car height of 48.8. This data point will go through this decision tree and the output it gets will be multiplied with the learning rate and then added to the previous prediction.

Now let’s say m = 2, which means we have built 2 decision trees and want to make new predictions.

This time we will add the new DT made on the residuals to the previous prediction, F1(x). We will iterate through these steps again and again until the loss is negligible.

$$F_2(x) = F_1(x) + \nu\, h_2(x)$$

If a new data point, say height = 1.40, comes in, it’ll go through all the trees and then give the prediction. Here we have only 2 trees, so the data point will go through both, and the final output will be F2(x).
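To tie the five steps together, here is a compact from-scratch sketch of the regressor built on scikit-learn’s decision trees. It is an illustration of the steps above under the squared-error loss, not production code; the class name and defaults are my own.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

class SimpleGBMRegressor:
    """Bare-bones gradient boosting for squared loss, following steps 1-5."""
    def __init__(self, n_trees=100, nu=0.1, max_depth=2):
        self.n_trees, self.nu, self.max_depth = n_trees, nu, max_depth

    def fit(self, X, y):
        self.f0 = np.mean(y)                    # step 1: average of the target
        self.trees = []
        pred = np.full(len(y), self.f0)
        for _ in range(self.n_trees):
            residuals = y - pred                # step 2: pseudo residuals
            tree = DecisionTreeRegressor(max_depth=self.max_depth)
            tree.fit(X, residuals)              # steps 3-4: tree on residuals
                                                # (for squared loss, each leaf's
                                                # output is the residual average)
            pred = pred + self.nu * tree.predict(X)  # step 5: update predictions
            self.trees.append(tree)
        return self

    def predict(self, X):
        pred = np.full(X.shape[0], self.f0)
        for tree in self.trees:
            pred = pred + self.nu * tree.predict(X)
        return pred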

What is Gradient Boosting Classifier?

A gradient boosting classifier is used when the target column is binary. All the steps explained for the gradient boosting regressor apply here; the only difference is the loss function. Earlier we used mean squared error when the target column was continuous, but this time we will use log-likelihood as our loss function.

Let’s see how this loss function works, to read more about log-likelihood I recommend you to go through this article where I have given each detail you need to understand this.

The loss function for the classification problem is given below:
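$$L = -\sum_{i=1}^{n}\Big[y_i \log(p_i) + (1 - y_i)\log(1 - p_i)\Big]$$

where p_i is the predicted probability of class 1.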

Our first step in the gradient boosting algorithm was to initialize the model with some constant value. There we used the average of the target column; here we’ll use log(odds) to get that constant value. The question is: why log(odds)?

When we differentiate this loss function, we will get a function of log(odds) and then we need to find a value of log(odds) for which the loss function is minimum.

Confused right? Okay let’s see how it works:

Let’s first transform this loss function so that it is a function of log(odds); I’ll tell you later why we do this transformation.
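Substituting $p = \frac{e^{\log(\text{odds})}}{1 + e^{\log(\text{odds})}}$ gives:

$$L = -\sum_{i=1}^{n}\Big[y_i \cdot \log(\text{odds}) - \log\big(1 + e^{\log(\text{odds})}\big)\Big]$$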

Now this is our loss function, and we need to minimize it. For this, we take the derivative w.r.t log(odds) and set it equal to 0:
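$$\frac{\partial L}{\partial \log(\text{odds})} = -\sum_{i=1}^{n}\big(y_i - p\big) = 0 \;\Rightarrow\; p = \frac{1}{n}\sum_{i=1}^{n} y_i, \qquad \log(\text{odds}) = \log\frac{p}{1-p}$$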

Here y are the observed values

You must be wondering why we transformed the loss function into a function of log(odds). Actually, sometimes it is easier to work with the function of log(odds), and sometimes it is easier to work with the function of the predicted probability p.

It is not compulsory to transform the loss function, we did this just to have easy calculations.

Hence the minimum value of this loss function will be our first prediction (base model prediction)

In the gradient boosting regressor, our next step was to calculate the pseudo residuals, where we multiplied the derivative of the loss function by -1. We will do the same here, but now the loss function is different, and we are dealing with the probability of an outcome.

After finding the residuals we can build a decision tree with all independent variables and target variables as “Residuals”.

Now that we have our first decision tree, we find the final output of the leaves, because there might be a case where a leaf gets more than one residual and we need to calculate the final output value. The math behind this step is out of the scope of this article, so I will just mention the direct formula to calculate the output of a leaf:
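$$\gamma_{jm} = \frac{\sum_{x_i \in R_{jm}} r_{im}}{\sum_{x_i \in R_{jm}} p_{i,m-1}\big(1 - p_{i,m-1}\big)}$$

where the sums run over the observations in leaf R_{jm} and p_{i,m-1} is the probability predicted in the previous round.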

Finally, we are ready to get new predictions by adding our base model with the new tree we made on residuals.

There are a few variations of gradient boosting, and a couple of them are briefly explained in the coming article.

Implementation Using scikit-learn

The task here is to classify the income of an individual, given the required inputs about their personal life.

First, let’s import all required libraries.

# Import all relevant libraries
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn import preprocessing
import warnings
warnings.filterwarnings("ignore")

Now let's read the dataset and look at the columns to understand the information better.

df = pd.read_csv('income_evaluation.csv')
df.head()

I have already done the data preprocessing part, as my main aim here is to show you how to implement this in Python. Now, for training and testing our model, the data has to be divided into train and test sets.

We will also scale the data to lie between 0 and 1.

# Split dataset into test and train data
X_train, X_test, y_train, y_test = train_test_split(df.drop('income', axis=1), df['income'], test_size=0.2)
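The scaling step mentioned above isn't shown in the original code. A minimal sketch: scaling to the 0-1 range corresponds to scikit-learn's MinMaxScaler (the imports above bring in StandardScaler, which standardizes rather than scales to 0-1).

# Scale features to lie between 0 and 1 (illustrative sketch, not from the original post)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)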

Now let’s go ahead with defining the Gradient Boosting Classifier along with it’s hyperparameters. Next, we will fit this model on the training data.

# Define Gradient Boosting Classifier with hyperparameters
gbc = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05, random_state=100, max_features=5)
# Fit train data to GBC
gbc.fit(X_train, y_train)

The model has been trained and we can now observe the outputs as well.

Below, you can see the confusion matrix of the model, which gives a report of the number of classifications and misclassifications.

# Confusion matrix will give number of correct and incorrect classifications
print(confusion_matrix(y_test, gbc.predict(X_test)))
# Accuracy of model
print("GBC accuracy is %2.2f" % accuracy_score(y_test, gbc.predict(X_test)))

Let’s check the classification report also:

from sklearn.metrics import classification_report
pred = gbc.predict(X_test)
print(classification_report(y_test, pred))

Parameter Tuning in Gradient Boosting (GBM) in Python

Tuning n_estimators and Learning rate

n_estimators is the number of trees (weak learners) we want to add to the model. There is no single optimum value for the learning rate, as low values generally work better given that we train on a sufficient number of trees. A high number of trees can be computationally expensive, which is why I have taken a small number of trees here.

from sklearn.model_selection import GridSearchCV
grid = {
    'learning_rate': [0.01, 0.05, 0.1],
    'n_estimators': np.arange(100, 500, 100),
}
gb = GradientBoostingClassifier()
gb_cv = GridSearchCV(gb, grid, cv=4)
gb_cv.fit(X_train, y_train)
print("Best Parameters:", gb_cv.best_params_)
print("Train Score:", gb_cv.best_score_)
print("Test Score:", gb_cv.score(X_test, y_test))

We see the accuracy increased from 86% to 89% after tuning n_estimators and the learning rate. The “true positive” and “true negative” rates also improved.

We can also tune max_depth parameter which you must have heard in decision trees and random forests.

grid = {'max_depth': [2, 3, 4, 5, 6, 7]}
gb = GradientBoostingClassifier(learning_rate=0.1, n_estimators=400)
gb_cv = GridSearchCV(gb, grid, cv=4)
gb_cv.fit(X_train, y_train)
print("Best Parameters:", gb_cv.best_params_)
print("Train Score:", gb_cv.best_score_)
print("Test Score:", gb_cv.score(X_test, y_test))

The accuracy increased even more when we tuned the parameter max_depth.

End Notes

I hope you got an understanding of how the Gradient Boosting algorithm works under the hood. I have tried to show you the math behind it in the easiest way possible.

In the next article, I will explain Extreme Gradient Boosting (XGBoost), which is again a technique to combine various models and improve the accuracy score. It is an extension of the gradient boosting algorithm.

About the Author

I am an undergraduate student currently in my last year majoring in Statistics (Bachelors of Statistics) and have a strong interest in the field of data science, machine learning, and artificial intelligence. I enjoy diving into data to discover trends and other valuable insights about the data. I am constantly learning and motivated to try new things.

I am open to collaboration and work.

For any doubt and queries, feel free to contact me on Email

Connect with me on LinkedIn and Twitter

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


Drones, AI, And Smart Meetings At The Beginning Of The Microsoft Build Conference

Microsoft kicked off its annual conference for developers, called Build, on Monday. “The world is becoming a computer,” Satya Nadella, the company’s CEO, said towards the beginning of his keynote address, describing the way that computing power is starting to be found everywhere, from cars to drones to homes. That pretty unsurprising idea, plus heavy doses of talk about artificial intelligence and accessibility, mixed reality, chatbots, the cloud, and the “edge” (the computers and phones you actually interact with) defined the first hours of the event.

We got to see Alexa and Cortana hanging out

We live in a world of talking digital helpers, from Siri, to the Google Assistant, to Amazon Alexa, and Microsoft’s version, Cortana. While Amazon and Microsoft announced back in August of last year that the two companies would be collaborating to make their two virtual assistants work together, today we saw a version of that in action.

If you’re imagining Alexa and Cortana freely talking to each other like two robotic hosts in Westworld, you are out of luck. However, what they showed was still interesting.

Meghan Saunders, general manager for Cortana at Microsoft, and Tom Taylor, a senior vice president for Alexa at Amazon, joined each other on stage for a demonstration. Speaking into an Echo, Saunders added an item to her shopping list through Alexa, then asked Alexa to open Cortana. From there, Cortana spoke to Saunders through the Echo, read her schedule out loud, then helped her send an email to Taylor.

Taylor then started speaking with Cortana from a computer, then asked it to open Alexa. Alexa spoke to him through the computer, then called him an Uber to a restaurant called Harvest Vine.

The system, still in beta, feels a little silly (asking one virtual assistant to let you speak with another seems less efficient than just speaking to one of them), but it’s still nice to see the robots getting along, and it’s conceivable that this could be helpful for some people in specific situations.

You can sign up here to be notified with more info on this collaboration.

Drones and AI

Microsoft is working with drone-making giant DJI and showcased the intersection of artificial intelligence and unmanned aerial vehicles on stage. In a demonstration, a DJI Mavic Air drone flew around and live-streamed a video feed of industrial-looking pipes onstage with a simulated defect; a laptop receiving the livestream used AI to inspect the video in real time and identify the anomaly, shown with a yellow box around it on screen.

It’s easy to see how this kind of feature would be helpful for industries with tons of equipment to inspect: a human flies a drone, and instead of people eyeballing everything, the AI looks for problems and highlights them. And since the AI analysis happens right on the laptop (it can also run directly onboard a bigger, fancier drone called the DJI M200), the company’s data doesn’t have to go up to a cloud for analysis.

Smart meetings

At another moment, Microsoft’s Raanah Amjadi showed a concept of how a prototype device could help out during a meeting. They simulated a meeting about “smart buildings” right on stage that felt very futuristic and incredibly canned.

But the pyramid-like prototype device on the table, equipped with the ability to both listen to and see the meeting, did cool stuff. For one thing, it was able to visually identify, and then greet out loud, the people who physically walked into the meeting, saying “Hello Dave” when someone named Dave Brown entered.

On a screen in the meeting room, the system recognized who was talking and took down a transcript in real time of what everyone said. In a column next to the transcript, the AI also made a note of follow-up items that appeared to be automatically generated when someone said the phrase “follow up” in a sentence. The set-up can also give a remote worker a live translation into a different language.

So if you’re excited for a future where there’s automatically a record of every silly thing you say in a meeting and the follow-up items are instantly written down, Microsoft could someday make you happy.

The Week In Drones: Cameras That Follow You, Retro Police Copters, And More

Here’s a roundup of the week’s top drone news, designed to capture the military, commercial, non-profit, and recreational applications of unmanned aircraft.

Spy From The Past

Paleofuture recently unearthed a police drone — from 1976. The Westland Wisp was a prototype surveillance drone designed for police work. Foreshadowing future drones, it could transmit regular video (and maybe infrared) images back to a control station. After the Wisp, Westland made two more drone helicopters, the Wideye and the Sharpeye.

The Westland Wisp

Don’t mind me, just carrying around a flying police state here.

A Possible Private Privacy Violation

Long rumored, it appears the feared arrival of a “peeping Tom with a drone” finally happened this week. In Seattle, a woman called police after seeing a drone hovering outside her window.

Update: It appears that while the drone was outside the window, its purpose was not capturing lewd photographs. Instead, the drone company was apparently trying to capture an aerial panoramic view for a developer.

Following Filmographer

Hexo+ is a drone that carries a camera and automatically follows a specified person. Currently a remarkably well-funded project on Kickstarter, the drone can fly at about 45 mph for 15 minutes. Set up with a smartphone app, the drone follows and films a person with an attached camera. The end result? Awesome video footage of one’s jogging, and one annoying flying machine closer to a future of robot smog.

Watch a video about it below:

New FAA Rules For Tiny Drones

We’re living in a weird pre-regulatory limbo of drones before drone law. Particularly challenging for the Federal Aviation Administration are small drones and model airplanes, which until the past decade were largely indistinguishable. The FAA wants to keep model airplane hobbyists happy at the same time as it tightly restricts all commercial uses of drones. This challenge is inherent in new guidelines the FAA put out this week for model airplane use.

FAA Model Airplane Guidelines Detail

The most obvious impact of these rules is on drone delivery services, a common drone gimmick. More revealing is the first row, which says it’s okay to have model airplane clubs but not okay for those clubs to hold contests with cash prizes. It’s good that the FAA is trying to regulate drones, but attempts like this to distinguish between hobbyist model airplanes and small drones have unintended consequences for both communities.

A Predator Drone Waits At Balad Air Base, Iraq. 2004

Back To Baghdad

The United States revealed this week that, as part of their renewed presence in Iraq, they are flying armed Predator drones as part of reconnaissance efforts there. The drones are flying from bases in Kuwait, according to the Pentagon.

In related news, the Stimson Center, a DC think tank, released an in-depth study of the role of drones and targeted killing. The report, co-authored by retired Army general John P. Abizaid and defense analyst Rosa Brooks, calls into question several of the myths concerning drones at war. Perhaps the biggest finding, the report states, is that we don’t even know whether drone strikes are working toward any strategic goals.
