Stable Diffusion Prompt: A Definitive Guide

Developing a process to build good prompts is the first step every Stable Diffusion user tackles. This article summarizes the process and techniques developed through experimentation and other users' input. The goal is to write down everything I know about prompts so that you can find it all in one place.

Anatomy of a good prompt

A good prompt needs to be detailed and specific. A good process is to look through a list of keyword categories and decide whether you want to use any of them.

The keyword categories are

Subject

Medium

Style

Artist

Website

Resolution

Additional details

Color

Lighting

An extensive list of keywords from each category is available in the prompt generator. You can also find a short list here.

You don’t have to include keywords from all categories. Treat them as a checklist to remind you what could be used.

Let’s review each category and generate some images by adding keywords from each. I will use the v1.5 base model. To see the effect of the prompt alone, I won’t be using negative prompts for now. Don’t worry, we will study negative prompts in a later part of this article. All images are generated with 30 steps of the DPM++ 2M Karras sampler and an image size of 512×704.
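
The examples in this article use the AUTOMATIC1111 GUI, but if you prefer scripting, the same settings can be roughly reproduced with the Hugging Face diffusers library. The sketch below is my own rough equivalent, not part of the original workflow; the model id, the use_karras_sigmas option, and the exact argument names are assumptions you should check against your installed diffusers version.

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load a Stable Diffusion v1.5 checkpoint (assumed model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Roughly matches the "DPM++ 2M Karras" sampler used in this article.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing",
    num_inference_steps=30,  # 30 sampling steps
    width=512,               # image size 512x704
    height=704,
    guidance_scale=7,        # default CFG scale
).images[0]
image.save("sorceress.png")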

Subject

The subject is what you want to see in the image. A common mistake is not writing enough about the subjects.

Let’s say we want to generate a sorceress casting magic. A newbie may just write

A sorceress

That leaves too much room for imagination. How do you want the sorceress to look? Any words describing her that would narrow down her image? What does she wear? What kind of magic is she casting? Is she standing, running, or floating in the air? What’s the background scene?

Stable Diffusion cannot read our minds. We have to say exactly what we want.

A common trick for human subjects is to use celebrity names. They have a strong effect and are an excellent way to control the subject’s appearance. However, be aware that these names may change not only the face but also the pose and other elements. I will defer this topic to a later part of this article.

As a demo, let’s cast the sorceress to look like Emma Watson, the most used keyword in Stable Diffusion. Let’s say she is powerful and mysterious and uses lightning magic. We want her outfit to be very detailed so she would look interesting.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing

We get Emma Watson 11 out of 10 times. Her name has such a strong effect on the model. I think she’s popular among Stable Diffusion users because she looks decent, young, and consistent across a wide range of scenes. Trust me, we cannot say the same for all actresses, especially the ones who have been active in the 90s or earlier…

Medium

Medium is the material used to make artwork. Some examples are illustration, oil painting, 3D rendering, and photography. Medium has a strong effect because one keyword alone can dramatically change the style.

Let’s add the keyword digital painting.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing, digital painting

We see what we expected! The images changed from photographs to digital paintings. So far so good. I think we can stop here. Just kidding.

Style

The style refers to the artistic style of the image. Examples include impressionist, surrealist, pop art, etc.

Let’s add hyperrealistic, fantasy, surrealist, full body to the prompt.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing, digital painting, hyperrealistic, fantasy, Surrealist, full body

Mmm… not sure if they have added much. Perhaps these keywords were already implied by the previous ones. But I guess it doesn’t hurt to keep it.

Artist

Artist names are strong modifiers. They allow you to dial in the exact style using a particular artist as a reference. It is also common to use multiple artist names to blend their styles. Now let’s add Stanley Artgerm Lau, a superhero comic artist, and Alphonse Mucha, a portrait painter in the 19th century.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing, digital painting, hyperrealistic, fantasy, Surrealist, full body, by Stanley Artgerm Lau and Alphonse Mucha

We can see the styles of both artists blending in and taking effect nicely.

Website

Niche graphic websites such as Artstation and Deviant Art aggregate many images of distinct genres. Using them in a prompt is a sure way to steer the image toward these styles.

Let’s add artstation to the prompt.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing, digital painting, hyperrealistic, fantasy, Surrealist, full body, by Stanley Artgerm Lau and Alphonse Mucha, artstation

It’s not a huge change but the images do look like what you would find on Artstation.

Resolution

Resolution represents how sharp and detailed the image is. Let’s add keywords highly detailed and sharp focus.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing, digital painting, hyperrealistic, fantasy, Surrealist, full body, by Stanley Artgerm Lau and Alphonse Mucha, artstation, highly detailed, sharp focus

Well, not a huge effect perhaps because the previous images are already pretty sharp and detailed. But it doesn’t hurt to add.

Additional details

Additional details are sweeteners added to modify an image. We will add sci-fi, stunningly beautiful and dystopian to add some vibe to the image.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing, digital painting, hyperrealistic, fantasy, Surrealist, full body, by Stanley Artgerm Lau and Alphonse Mucha, artstation, highly detailed, sharp focus, sci-fi, stunningly beautiful, dystopian

Color

You can control the overall color of the image by adding color keywords. The colors you specified may appear as a tone or in objects.

Let’s add some golden color to the image with the keyword iridescent gold.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing, digital painting, hyperrealistic, fantasy, Surrealist, full body, by Stanley Artgerm Lau and Alphonse Mucha, artstation, highly detailed, sharp focus, sci-fi, stunningly beautiful, dystopian, iridescent gold

The gold comes out great!

Lighting

Any photographer would tell you lighting is a key factor in creating successful images. Lighting keywords can have a huge effect on how the image looks. Let’s add cinematic lighting and dark to the prompt.

Emma Watson as a powerful mysterious sorceress, casting lightning magic, detailed clothing, digital painting, hyperrealistic, fantasy, Surrealist, full body, by Stanley Artgerm Lau and Alphonse Mucha, artstation, highly detailed, sharp focus, sci-fi, stunningly beautiful, dystopian, iridescent gold, cinematic lighting, dark

This completes our example prompt.

Remarks

As you may have noticed, the images are already pretty good with a few keywords added to the subject. When it comes to building a prompt for Stable Diffusion, you often don’t need many keywords to get good images.

Negative prompt

Using negative prompts is another great way to steer the image, but instead of putting in what you want, you put in what you don’t want. They don’t need to be objects. They can also be styles and unwanted attributes. (e.g. ugly, deformed)

Using negative prompts is a must for v2 models. Without it, the images would look far inferior to v1’s. They are optional for v1 models, but I routinely use them because they either help or don’t hurt.

I will use a universal negative prompt. You can read more about it if you want to understand how it works.

ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, bad anatomy, watermark, signature, cut off, low contrast, underexposed, overexposed, bad art, beginner, amateur, distorted face, blurry, draft, grainy

With universal negative prompt.

The negative prompt helped the images to pop out more, making them less flat.

Process of building a good prompt

Iterative prompt building

You should approach prompt building as an iterative process. As you see from the previous section, the images could be pretty good with just a few keywords added to the subject.

I always start with a simple prompt with subject, medium, and style only. Generate at least 4 images at a time to see what you get. Most prompts do not work 100% of the time. You want to get some idea of what they can do statistically.

Add at most two keywords at a time. Likewise, generate at least 4 images to assess its effect.

Using negative prompt

You can use a universal negative prompt if you are starting out.

Adding keywords to the negative prompt can be part of the iterative process. The keywords can be objects or body parts you want to avoid (Since v1 models are not very good at rendering hands, it’s not a bad idea to use “hand” in the negative prompt to hide them.)

Prompting techniques

You can modify a keyword’s importance in the prompt or switch it to a different keyword at a certain sampling step.

Keyword weight

(This syntax applies to AUTOMATIC1111 GUI.)

You can adjust the weight of a keyword with the syntax (keyword: factor). A factor of less than 1 makes the keyword less important, and a factor larger than 1 makes it more important.

For example, we can adjust the weight of the keyword dog in the following prompt

dog, autumn in paris, ornate, beautiful, atmosphere, vibe, mist, smoke, fire, chimney, rain, wet, pristine, puddles, melting, dripping, snow, creek, lush, ice, bridge, forest, roses, flowers, by stanley artgerm lau, greg rutkowski, thomas kindkade, alphonse mucha, loish, norman rockwell.

(dog: 0.5)

dog

(dog: 1.5)

Increasing the weight of dog tends to generate more dogs. Decreasing it tends to generate fewer. It is not always true for every single image. But it is true in a statistical sense.

This technique can be applied to subject keywords and all categories, such as style and lighting.

() and [] syntax

(This syntax applies to AUTOMATIC1111 GUI.)

An equivalent way to adjust keyword strength is to use () and []. (keyword) increases the strength of the keyword by a factor of 1.1 and is the same as (keyword:1.1). [keyword] decreases the strength by a factor of 0.9 and is the same as (keyword:0.9).

You can use multiples of them, just like in algebra, and the effect is multiplicative: ((keyword)) is equivalent to (keyword:1.21) and (((keyword))) to roughly (keyword:1.33). Similarly, stacking [] compounds the reduction: [[keyword]] is equivalent to (keyword:0.81) and [[[keyword]]] to roughly (keyword:0.73).
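
As a quick sanity check of the multiplicative rule, here is a tiny Python sketch of my own (not part of AUTOMATIC1111) that computes the effective weight for stacked parentheses and brackets:

# Effective weight of a keyword wrapped in n_parens pairs of () and
# n_brackets pairs of [], using the 1.1 and 0.9 factors described above.
def effective_weight(n_parens=0, n_brackets=0):
    return (1.1 ** n_parens) * (0.9 ** n_brackets)

print(effective_weight(n_parens=2))    # ((keyword))   -> 1.21
print(effective_weight(n_parens=3))    # (((keyword))) -> ~1.33
print(effective_weight(n_brackets=2))  # [[keyword]]   -> 0.81
print(effective_weight(n_brackets=3))  # [[[keyword]]] -> ~0.73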

Keyword blending

(This syntax applies to AUTOMATIC1111 GUI.)

You can mix two keywords. The proper term is prompt scheduling. The syntax is

[keyword1 : keyword2: factor]

factor controls at which step keyword1 is switched to keyword2. It is a number between 0 and 1.

For example, if I use the prompt

Oil painting portrait of [Joe Biden: Donald Trump: 0.5]

for 30 sampling steps.

That means the prompt in steps 1 to 15 is

Oil painting portrait of Joe Biden

And the prompt in steps 16 to 30 becomes

Oil painting portrait of Donald Trump

The factor determines when the keyword is changed: here it is after 30 steps × 0.5 = 15 steps.
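
The arithmetic is simple enough to write down. This is my own illustration, not AUTOMATIC1111's code:

# For a schedule like [keyword1 : keyword2 : factor], keyword1 is used
# up to and including the switch step, and keyword2 afterwards.
def switch_step(total_steps, factor):
    return int(total_steps * factor)

print(switch_step(30, 0.5))  # 15 -> steps 1-15 use keyword1, steps 16-30 use keyword2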

The effect of changing the factor is blending the two presidents to different degrees.

You may have noticed Trump is in a white suit which is more of a Joe outfit. This is a perfect example of a very important rule for keyword blending: The first keyword dictates the global composition. The early diffusion steps set the overall composition. The later steps refine details.

Quiz: What would you get if you swapped Donald Trump and Joe Biden?

Blending faces

A common use case is to create a new face with a particular look, borrowing from actors and actresses. For example, [Emma Watson: Amber Heard: 0.85] with 40 steps gives a look between the two:

When carefully choosing the two names and adjusting the factor, we can get the look we want precisely.

Poor man’s prompt-to-prompt

Using keyword blending, you can achieve effects similar to prompt-to-prompt, generating pairs of highly similar images with edits. The following two images are generated with the same prompt except for a prompt schedule to substitute apple with fire. The seed and number of steps were kept the same.

holding an [apple: fire: 0.9]

holding an [apple: fire: 0.2]

The factor needs to be carefully adjusted. How does it work? The theory is that the overall composition of the image is set by the early diffusion steps. Once the diffusion process is trapped in a small region of the sampling space, swapping a keyword won’t have a large effect on the overall image; it only changes a small part.

How long can a prompt be?

Depending on what Stable Diffusion service you are using, there could be a maximum number of keywords you can use in the prompt. In the basic Stable Diffusion v1 model, that limit is 75 tokens.

Note that tokens are not the same as words. The CLIP model Stable Diffusion uses automatically converts the prompt into tokens, a numerical representation of words it knows. If you put in a word it has not seen before, it will be broken up into two or more sub-words it does know. The words it knows are called tokens, which are represented as numbers. For example, dream is one token and beach is one token. But dreambeach is two tokens: the model doesn’t know this word, so it breaks it up into dream and beach, which it does know.
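
You can inspect this splitting yourself with the CLIP tokenizer from the Hugging Face transformers library. This is a sketch under the assumption that the openai/clip-vit-large-patch14 tokenizer matches the one used by the Stable Diffusion v1 text encoder; the exact sub-word splits may differ slightly.

from transformers import CLIPTokenizer

# Tokenizer assumed to match the Stable Diffusion v1 text encoder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

print(tokenizer.tokenize("dream beach"))  # two known words -> two tokens
print(tokenizer.tokenize("dreambeach"))   # unknown word -> split into sub-words

# Count the tokens in a full prompt (minus the begin/end markers).
prompt = "Emma Watson as a powerful mysterious sorceress, casting lightning magic"
print(len(tokenizer(prompt)["input_ids"]) - 2)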

Prompt limit in AUTOMATIC1111

AUTOMATIC1111 has no token limits. If a prompt contains more than 75 tokens, the limit of the CLIP tokenizer, it will start a new chunk of another 75 tokens, so the new “limit” becomes 150. The process can continue forever or until your computer runs out of memory…

Each chunk of 75 tokens is processed independently, and the resulting representations are concatenated before feeding into Stable Diffusion’s U-Net.
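
To illustrate the idea, the chunking amounts to slicing the token list into groups of at most 75 and encoding each group separately. This is a simplified sketch of the concept, not AUTOMATIC1111's actual implementation:

# Split a list of token ids into chunks of at most 75 tokens.
# Each chunk would be padded, run through the CLIP text encoder,
# and the resulting embeddings concatenated.
def chunk_tokens(token_ids, chunk_size=75):
    return [token_ids[i:i + chunk_size] for i in range(0, len(token_ids), chunk_size)]

tokens = list(range(180))  # pretend prompt of 180 tokens
print([len(c) for c in chunk_tokens(tokens)])  # [75, 75, 30]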

In AUTOMATIC1111, you can check the number of tokens by looking at the small box at the top right corner of the prompt input box.

Token counter in AUTOMATIC1111

Checking keywords

The fact that you see people using a keyword doesn’t mean that it is effective. Like homework, we all copy each other’s prompts, sometimes without much thought.

You can check the effectiveness of a keyword by just using it as a prompt. For example, does the v1.5 model know the American painter Henry Asencio? Let’s check with the prompt

henry asencio

Positive!

How about the Artstation sensation wlop?

wlop

Well, doesn’t look like it. That’s why you shouldn’t use “by wlop”. That’s just adding noise.

Josephine Wall is a resounding yes:

You can use this technique to examine the effect of mixing two or more artists.

Henry asencio, Josephine Wall

Limiting the variation

To be good at building prompts, you need to think like Stable Diffusion. At its core, it is an image sampler, generating pixel values that we humans are likely to judge as legitimate and good. You can even use it without a prompt, and it will generate many unrelated images. In technical terms, this is called unconditioned or unguided diffusion.

The prompt is a way to guide the diffusion process toward the region of the sampling space that matches it. I said earlier that a prompt needs to be detailed and specific. That’s because a detailed prompt narrows down the sampling space. Let’s look at an example.

castle

castle, blue sky background

wide angle view of castle, blue sky background

By adding more descriptive keywords to the prompt, we narrow down the sampling of castles. We asked for any image of a castle in the first example. Then we asked for only those with a blue sky background. Finally, we demanded it be taken as a wide-angle photo.

The more you specify in the prompt, the less variation in the images.

Association effect

Attribute association

Some attributes are strongly correlated. When you specify one, you will get the other. Stable Diffusion generates the most likely images, which can produce unintended association effects.

Let’s say we want to generate photos of women with blue eyes.

a young female with blue eyes, highlights in hair, sitting outside restaurant, wearing a white outfit, side light

Blue eyes

What if we change to brown eyes?

a young female with brown eyes, highlights in hair, sitting outside restaurant, wearing a white outfit, side light

Brown eyes

Nowhere in the prompts did I specify ethnicity. But because people with blue eyes are predominantly European, Caucasians were generated. Brown eyes are more common across different ethnicities, so you will see a more diverse sample of races.

Stereotyping and bias are big topics in AI models. I will confine myself to the technical aspects in this article.

Association of celebrity names

Every keyword has some unintended associations. That’s especially true for celebrity names. Some actors and actresses like to be in certain poses or wear certain outfits when taking pictures, and those poses and outfits end up in the training data. If you think about it, model training is nothing but learning by association. If Taylor Swift (in the training data) always crosses her legs, the model will think leg crossing is Taylor Swift too.

Prompt: full body taylor swift in future high tech dystopian city, digital painting

When you use Taylor Swift in the prompt, you may mean to use her face. But there’s an effect of the subject’s pose and outfit too. The effect can be studied by using her name alone as the prompt.

Poses and outfits are global compositions. If you want her face but not her poses, you can use keyword blending to swap her in at a later sampling step.

Association of artist names

Perhaps the most prominent example of association is seen when using artist names.

The 19th-century Czech painter Alphonse Mucha is a popular occurrence in portrait prompts because the name helps generate interesting embellishments, and his style blends very well with digital illustrations. But it also often leaves a signature circular or dome-shaped pattern in the background. They could look unnatural in outdoor settings.

Prompt: digital painting of [Emma Watson:Taylor Swift: 0.6] by Alphonse Mucha. (30 steps)

Embeddings are keywords

Embeddings, the result of textual inversion, are nothing but combinations of keywords. You can expect them to do a bit more than what they claim.

Let’s see the following base images of Ironman making a meal without using embeddings.

Prompt: iron man cooking in kitchen.

Style-Empire is an embedding I like to use because it adds a dark tone to portrait images and creates an interesting lighting effect. Since it was trained on an image of a street scene at night, you can expect it to add some blacks and perhaps buildings and streets. See the images below with the embedding added.

Prompt: iron man cooking in kitchen Style-Empire.

Note some interesting effects:

The background of the first image changed to city buildings at night.

Iron man tends to show his face. Perhaps the training image is a portrait?

So even if an embedding is intended to modify the style, it is just a bunch of keywords and can have unintended effects.

Effect of custom models

Using a custom model is the easiest way to achieve a style, guaranteed. This is also a unique charm of Stable Diffusion. Because of the large open-source community, hundreds of custom models are freely available.

When using a model, we need to be aware that the meaning of a keyword can change. This is especially true for styles.

Let’s use Henry Asencio again as an example. In v1.5, his name alone generates:

Using DreamShaper, a model fine-tuned for portrait illustrations, with the same prompt gives

It is a very decent but distinctly different style. The model has a strong bias toward generating clear, pretty faces, which is revealed here.

So make sure to check when you use a style in custom models. van Gogh may not be van Gogh anymore!

Region-specific prompts

Do you know you can specify different prompts for different regions of the image?

For example, you can put the moon at the top left:

Or at the top right:

You can do that by using the Regional Prompter extension. It’s a great way to control image composition!


How To Run Stable Diffusion 2.0 And A First Look

Stable Diffusion 2.0 has been released. Improvements include, among other things, a larger text encoder (which improves image quality) and an increased default image size of 768×768 pixels.

Unlike v1.5, an NSFW filter was applied to the training data, so you can expect the generated images to be sanitized. Images of celebrities and by commercial artists were also suppressed.

In this article, I will cover 3 ways to run Stable Diffusion 2.0: (1) web services, (2) local install, and (3) Google Colab.

In the second part, I will compare images generated with Stable Diffusion 1.5 and 2.0. I will share some thoughts on how 2.0 should be used and in which way it is better than v1.

Web services

This is the easiest option. Go visit the websites below and put in your prompt.

Currently there are only limited web options available. But more should be coming in the next few weeks.

Here’s a list of websites where you can run Stable Diffusion 2.0:

The available settings range from limited to none.

Local install

Install base software

We will go through how to use Stable Diffusion 2.0 in the AUTOMATIC1111 GUI. Follow the installation instructions for your environment.

This GUI can be installed quite easily on Windows systems. You will need a dedicated GPU card with at least 6GB of VRAM to go with this option.

Download Stable Diffusion 2.0 files

After installation, you will need to download two files to use Stable Diffusion 2.0.

Download the model file (768-v-ema.ckpt)

Download the config file, rename it to 768-v-ema.yaml

Put both of them in the model directory:

stable-diffusion-webui/models/Stable-diffusion

Google Colab

Stable Diffusion 2.0 is also available in the Colab notebook in the Quick Start Guide, among a few other popular models.

This is a good option if you don’t have a dedicated GPU card. You don’t need a paid account, though one helps prevent disconnections and makes it easier to get a GPU instance at busy times.

Go to Google Colab and start a new notebook.

In the Runtime menu, select Change runtime type. In the Hardware accelerator field, select GPU.

First, you will need to download the AUTOMATIC1111 repository.

And then upgrade to python 3.10.

!sudo apt-get update -y
!sudo apt-get install python3.10
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 2
!sudo apt-get install python3.10-distutils

Download Stable Diffusion 2.0 model and config files.

Finally, run the GUI. You should change the username and password below.

%cd stable-diffusion-webui
!python launch.py --share --gradio-auth username:password

This step is going to take a while so be patient. When it is done, you should see a message:

Follow the link to start the GUI.

Using Stable Diffusion 2.0

Select the Stable Diffusion 2.0 checkpoint file 768-v-ema.ckpt.

Since the model is trained on 768×768 images, make sure to set the width and height to 768. 30 steps of DPM++2M Karras sampler works well for most images.

Comparing v1 and v2 models

The first thing many people do is to compare images between v1 and v2.0.

Some care needs to be taken when doing the comparison.

Make sure to set the image size to 512×512 when using v1.4 or v1.5. These models were fine-tuned on images of that size, and 768×768 will not do well.

Set image size to 768×768 when using v2.0.

(Note: According to the announcement, v2 is designed to generate both 512×512 and 768×768 images. Although early testing seems to suggest 512×512 is not as good, it may just be software setting issues that we need to iron out.)

Don’t reuse v1 prompts

Prompts that work for v1 models may not work the same in v2. This is expected because v2 has switched to the much larger OpenCLIP H/14 text encoder (nearly 6 times larger than the one used in v1), which was trained from scratch.

Here’s v2.0 generation with the same prompt. Not that far off but I like v1.4’s better.

Stable Diffusion 2.0 images.

That’s not to say v2.0 is not good but the prompt is optimized for v1.4.

This prompt didn’t work so well for v2.0….

Stable Diffusion 2.0 images

It generates a more realistic style, which was not what I wanted.

I don’t think v2.0 is worse. The prompt just needs to be re-optimized.

If you have to reuse v1 prompt…

You can try the prompt converter, which works by first generating an image with v1 and then interrogating it with CLIP Interrogator 2. Effectively, it gives you a prompt describing what the language model sees in the image.

The original prompt of the failed ink-drips example is

[amber heard:Ana de Armas:0.7], (touching face:1.2), shoulder, by agnes cecile, half body portrait, extremely luminous bright design, pastel colors, (ink drips:1.3), autumn lights

which is translated to

a painting of a woman with watercolors on her face, a watercolor painting by Ignacy Witkiewicz, tumblr, process art, intense watercolor, expressive beautiful painting, painted in bright water colors

The images are not quite the same as v1 but they do look better.

v2.0 images generated using the prompt translated to v2.

v1 techniques are usable

My first impression is that many of the techniques developed for v1 still work. For example, keyword blending works quite well. The effect of using celebrity names appears to be reduced, though.

Stable Diffusion 2.0 images generated with keyword blending.

Prompt building

One observation I have is that Stable Diffusion 2.0 works better with longer, more specific prompts. This is also true for v1, but it seems to be even more so for v2.

To illustrate this point, below are images generated using a single-word prompt “cat”.

We got the expected results using v1.5:

v1.5 images with the prompt “cat”.

Below is what I got with the same prompt using v2.0.

v2.0 images with the prompt “cat”.

They are still kind of related to cats, but not quite what a user might expect.

What if we use a longer, more specific prompt?

A photo of a Russian forrest cat wearing sunglasses relaxing on a beach

v1.5 images

v2.0 images

Here’s where Stable Diffusion 2.0 shines: it generates higher-quality images in the sense that they match the prompt more closely.

This is likely the benefit of the larger language model, which increases the expressiveness of the network. v2.0 is able to understand text prompts a lot better than the v1 models, allowing you to design prompts with higher precision.

Summary

It’s still early days for Stable Diffusion 2.0. We have just gotten the software running, and we are actively exploring. I will write more when I find out how to use 2.0 more effectively. Stay tuned!

How To Use Stable Diffusion To Create Ai

Artificial intelligence chatbots, like ChatGPT, have become incredibly powerful recently – they’re all over the news! But don’t forget about AI image generators (like Stable Diffusion, DALL-E, and Midjourney). They can make virtually any image when provided with just a few words. Follow this tutorial to learn how to do this for free with no restrictions by running Stable Diffusion on your computer.


What Is Stable Diffusion?

Stable Diffusion is a free and open source text-to-image machine-learning model. Basically, it’s a program that lets you describe a picture using text, then creates the image for you. It was given billions of images and accompanying text descriptions and was taught to analyze and reconstruct them.

Stable Diffusion is not the program you use directly – think of it more like the underlying software tool that other programs use. This tutorial shows how to install a Stable Diffusion program on your computer. Note that there are many programs and websites that use Stable Diffusion, but many will charge you money and don’t give you as much control.

System Requirements

The rough guidelines for what you should aim for are as follows:

macOS: Apple Silicon (an M series chip)

Windows or Linux: NVIDIA or AMD GPU

RAM: 16GB for best results

GPU VRAM: at least 4GB

Storage: at least 15GB

Install AUTOMATIC1111 Web UI

We are using the AUTOMATIC1111 Web UI program, available on all major desktop operating systems, to access Stable Diffusion. Make sure you note where the “stable-diffusion-webui” directory gets downloaded.

AUTOMATIC1111 Web UI on macOS

In Terminal, install Homebrew by entering the command:

Copy the two commands for adding Homebrew to your PATH and enter them.

Quit and reopen Terminal, then enter:

brew install cmake protobuf rust python@3.10 git wget

Enter:

Download the latest stable version of Python 3.10.

AUTOMATIC1111 Web UI on Linux

Open the Terminal.

Enter one of the following commands, depending on your flavor of Linux:

Debian-based, including Ubuntu:

sudo apt-get update
sudo apt install wget git python3 python3-venv

Red Hat-based:

sudo dnf install wget git python3

Arch-based:

sudo pacman -S wget git python3

Install in “/home/$(whoami)/stable-diffusion-webui/” by executing this command:

Install a Model

You’ll still need to add at least one model before you can start using the Web UI.

Go to CIVITAI.

Move the .safetensors file downloaded in step 2 into your “stable-diffusion-webui/models/Stable-diffusion” folder.

Run and Configure the Web UI

At this point, you’re ready to run and start using the Stable Diffusion program in your web browser.

Paste the link in your browser address bar and hit Enter. The Web UI website will appear.

Scroll down and check “Enable quantization in K samplers for sharper and cleaner results.”


Use txt2img to Generate Concept Images

Now comes the fun part: creating some initial images and searching for one that most closely resembles the look you want.

Go to the “txt2img” tab.

In the first prompt text box, type words describing your image separated by commas. It helps to include words describing the style of image, such as “realistic,” “detailed,” or “close-up portrait.”

In the negative prompt text box below, type keywords that you do not want your image to look like. For instance, if you’re trying to create realistic imagery, add words like “video game,” “art,” and “illustration.”

Scroll down and set “Batch size” to “4.” This will make Stable Diffusion produce four different images from your prompt.

Make the “CFG Scale” a higher value if you want Stable Diffusion to follow your prompt keywords more strictly or a lower value if you want it to be more creative. A low value (like the default of 7) usually produces images that are good quality and creative.

If you don’t like any of the images, repeat steps 1 through 5 with slight variations.

Finding the Prompts Used for Past Images

After you’ve generated a few images, it’s helpful to get the prompts and settings used to create an image after the fact.

Upload an image into the box. All of the prompts and other details of your image will appear on the right.


Use img2img to Generate Similar Images

You can use the img2img feature to generate new images mimicking the overall look of any base image.

On the “img2img” tab, ensure that you are using a previously generated image with the same prompts.

Set the “Denoising strength” value higher or lower to regenerate more or less of your image (0.50 regenerates 50% and 1 regenerates 100%).

Rewrite the prompts to add completely new elements to the image and adjust other settings as desired.

Use inpaint to Change Part of an Image

The inpaint feature is a powerful tool that lets you make precise spot corrections to a base image by using your mouse to “paint” over parts of an image that you want to regenerate. The parts you haven’t painted aren’t changed.

Change your prompts if you want new visual elements.

Use your mouse to paint over the part of the image you want to change.

Change the “Sampling method” to DDIM, which is recommended for inpainting.

Set the “Denoising strength,” choosing a higher value if you’re making extreme changes.


Upscale Your Image

You’ve been creating relatively small images at 512 x 512 pixels up to this point, but if you increase your image’s resolution, it also increases the level of visual detail.

Install the Ultimate SD Upscale Extension

Resize Your Image

On the “img2img” tab, ensure you are using a previously generated image with the same prompts. At the front of your prompt input, add phrases such as “4k,” “UHD,” “high res photo,” “RAW,” “closeup,” “skin pores,” and “detailed eyes” to hone in on more detail. At the front of your negative prompt input, add phrases such as “selfie,” “blurry,” “low res,” and “phone cam” to steer away from those qualities.

Set your “Denoising strength” to a low value (around 0.25) and double the “Width” and “Height” values.

In the “Script” drop-down, select “Ultimate SD upscale,” then under “Upscaler,” check the “R-ESRGAN 4x+” option.

Frequently Asked Questions

What is the difference between Stable Diffusion, DALL-E, and Midjourney?

All three are AI programs that can create almost any image from a text prompt. The biggest difference is that only Stable Diffusion is completely free and open source. You can run it on your computer without paying anything, and anyone can learn from and improve the Stable Diffusion code. The fact that you need to install it yourself makes it harder to use, though.

DALL-E and Midjourney are both closed source. DALL-E can be accessed primarily via its website and offers a limited number of image generations per month before asking you to pay. Midjourney can be accessed primarily via commands on its Discord server and has different subscription tiers.

What is a model in Stable Diffusion?

A model is a file representing an AI algorithm trained on specific images and keywords. Different models are better at creating different types of images – you may have a model good at creating realistic people, another that’s good at creating 2D cartoon characters, and yet another that’s best for creating landscape paintings.

The Deliberate model we installed in this guide is a popular model that’s good for most images, but you can check out all kinds of models on websites like Civitai or Hugging Face. As long as you download a .safetensors file, you can import it to the AUTOMATIC1111 Web UI using the same instructions in this guide.

What is the difference between SafeTensor and PickleTensor?

In short, always use SafeTensor to protect your computer from security threats.

While both SafeTensor and PickleTensor are file formats used to store models for Stable Diffusion, PickleTensor is the older and less secure format. A PickleTensor model can execute arbitrary code (including malware) on your system.

Should I use the batch size or batch count setting?

You can use both. A batch is a group of images that are generated in parallel. The batch size setting controls how many images there are in a single batch. The batch count setting controls how many batches get run in a single generation; each batch runs sequentially.

If you have a batch count of 2 and a batch size of 4, you will generate two batches and a total of eight images.




The Definitive Guide To Battery Charging

Mobile devices are becoming the norm when it comes to personal technology these days, which means that most of the technology you use every day contains some sort of battery. That means you need to charge it up when the juice runs out, but do you really know the right way to do it? 

Many people seem to be in doubt when it comes to battery charging. There are plenty of myths and downright poor practices floating around, so we’ve decided to throw together a definitive guide to battery charging, so you can spend less time worrying about your batteries and more time enjoying your gadgets.


Battery Charging Chemistry

One of the most important things you need to know about batteries is that there are drastically different ways of making one. All batteries use chemicals to store electrical energy, but the specific chemistry at work determines what that battery’s characteristics are. 

For example, nickel-cadmium batteries can be charged relatively quickly, but suffer from the so-called “memory effect” where capacity seems to diminish if the battery isn’t completely discharged before recharging. Nickel metal hydride batteries have a higher capacity than nickel cadmium, but are sensitive to overcharging and can’t stand up to as many charge cycles.

For most modern electronics, the battery chemistry of choice is lithium ion. Specifically lithium polymer batteries. These batteries have the highest power to weight ratio, which makes them perfect for mobile phones, laptops and drones. This article is primarily about lithium ion batteries, because they are so common now. 

Battery Lifespan

Lithium polymer batteries have almost none of the drawbacks previous popular battery types have. There’s no memory effect, they charge pretty quickly these days and are very affordable. However, they do wear out every time you complete a full charge-discharge cycle. Each battery is rated for a certain number of these cycles, after which its maximum capacity starts to decline. Eventually the battery won’t hold a useful amount of charge and will have to be replaced.

These days, devices like phones, tablets and even some laptops don’t have batteries that can be removed. So replacing them usually requires an expensive visit to an authorized dealer.

The good news is that you can extend the useful life of a battery in a variety of ways. Check out our detailed guide on how to preserve your battery and keep it from needing to be replaced sooner than necessary.

A lot of this has to do with charging habits, such as allowing lithium batteries to discharge to 50% once or twice a month or taking certain devices off AC power once full. It’s a little more nuanced than that however, so be sure to give the aforementioned article a look if battery longevity is an issue that concerns you.

Using The Right Charger

Lithium batteries are actually pretty volatile, which is why regulations require that they have sophisticated protections to prevent fires, explosions, and other dangerous events.

You may recall that imported electric scooters were responsible for burning down several people’s homes a few years ago. That’s because these devices lacked the safety features mandated by European and US authorities. So the lithium batteries inside received improper charging, causing a runaway reaction.

This is why it’s very important to only use battery charging equipment that conforms to the safety standards of the EU, the USA, or the territory in which you live. Do not buy or use chargers or batteries that are not certified in this way. While devices such as smartphones have their own safety features to prevent these types of catastrophic failures, they rely at least partly on the attached charger.

While safety is an important reason to use the correct charger, the other reason to match the right charger to your device is charging speed. Different devices may have different fast-charging standards. So if you use a charger and phone with mismatched fast charging standards, they’ll fall back to the standard lowest common denominator.

USB has a safe, but very slow basic charging speed. Qualcomm has “Quick Charge”, Samsung has “Adaptive Fast Charging” and USB 3.1 over USB-C has “Power Delivery”.

Most modern chargers support multiple fast charging modes, so it’s likely at least one of them will work with your device. However in almost all cases you’ll get the best results with a charger from the same manufacturer as the device.

Some power banks, such as this Romoss 30+ model support just about every connection type and both Quick Charge and USB-C Power Delivery. It can also fast charge itself, which makes a huge difference with a bank that large. 

Incidentally, if you want to know more about power banks, check out our detailed article on these handy portable power bricks.

Software Battery Charge Controls

Modern devices that contain lithium batteries, such as smartphones or laptops, usually have sophisticated battery charging software that helps manage the health of those batteries. It monitors temperatures and voltages, keeps detailed records of the battery’s history, and controls the charge level based on what the device is being used for.

For example, even if your phone shows that it’s at 100% charge, the truth is probably somewhere slightly lower than this. Because lithium batteries degrade more quickly if they are constantly kept at 100% capacity, the phone will discharge a little if left plugged in overnight, to prevent stress to the battery. 

The latest macOS devices also have this feature. If you mainly use your MacBook plugged in, the battery will discharge to 90% and stay there, drastically increasing the lifespan of the battery.

Long-term Battery Storage

This brings up another issue with battery charging: device storage. Lithium batteries will discharge at a slow rate all by themselves sitting on a shelf. If you leave them to drain completely, the battery may become permanently unusable. However, charging them to 100% and then storing them isn’t a great idea either, for the same reasons we just discussed above.

We can take a lesson from “intelligent” batteries such as those found in DJI’s drones. These batteries time how long it’s been since they’ve been used. Leave them on the shelf for too long and they’ll self discharge to about 60% capacity and then try to maintain that.

If you’re going to put a phone or other lithium device away over the long term, charge it up to around 60% before putting it away. Then check once a month to ensure the battery hasn’t gone below 30%. If it gets close to that figure charge it back up to 60%. This way the battery should still be fine when you need to use it again.

Lithium Battery Revival

Lithium batteries have a protection circuit in them that will put the battery to sleep if it gets discharged too much. In some cases it’s possible to bring these batteries back to life by using special chargers that have a “boost” mode.

This is not always successful and if the battery has been over-discharged for too long it can be dangerous to attempt this. If you have a battery that can’t simply be replaced, we recommend taking it to a specialist for an attempted revival.

Battery Charging Safety

As we noted earlier, Lithium batteries are pretty volatile. While modern lithium batteries have many safety features built in, they do still fail. One of the most sensitive times is during charging, so you need to be extra vigilant when juicing up your lithium-powered device.

Never charge a device with a puffy, swollen battery. While a bit of heat is normal when charging a lithium battery, a very hot device could be a sign of imminent failure.

Think carefully about where you charge your devices. Are they close to other objects that could burn up easily? It’s better to charge lithium devices in a designated area where battery failure can be contained. If you’re really concerned, consider getting a Lipo Guard. You can place charging devices or batteries inside it and, should they fail, the explosion and flame is contained within the special materials the bag is made of.

Replacement Batteries

No matter how well you treat your batteries, they will eventually need a replacement. Whether you do this yourself or have a professional handle the installation, be very careful of the batteries you choose. There are many counterfeit batteries or poor quality unauthorized replacement batteries on the market. 

A Guide To Protesting During A Pandemic

Protesting has always been risky business. The current situation in the US, with people in all 50 states standing up to racism and police brutality, is no exception. But since we’re still in the middle of a pandemic, these demonstrations have an extra layer of risk.

If you want to support the cause from home, you can help by donating to organizations and raising awareness of the issues that ignited the protests in the first place. But if you want to be a part of the movement by taking to the streets, there are ways to minimize your chances of catching COVID-19.

It’s impossible to be completely safe from infection. But being careful and prepared will allow you to freely exercise your First Amendment rights in the safest way possible, while also protecting your community from the novel coronavirus.

The basics: time and place

At a protest, the two most important factors you’ll be dealing with are how long you stay and whether the demonstration is in an open or closed space. It’s all about managing risk.

The longer you stay, the more likely you are to be infected if there’s someone with the disease in attendance. Closed premises—such as an auditorium or a subway platform—are worse than open-air locations, since they don’t allow proper air circulation.

If the protest you want to go to is indoors, you may want to skip it. And since there’s no particular amount of time that makes it safe to attend any large gathering (no matter where it is), plan the duration of your stay beforehand and stick to it.

Keep doing what you’re doing—and then some

No matter where you go during this pandemic, you should be taking precautions to protect yourself and others. These include wearing a mask at all times and either washing your hands when you touch something or someone, or wearing disposable surgical gloves.

At a protest, you should follow these rules even more rigorously. Without a mask, your chants for justice or inspiring speech will spray drops of saliva onto whatever’s nearby, including people’s skin and faces. Yes, this is gross, but because many COVID-19 carriers don’t know they’re infected—and you may be one of them—it also increases the likelihood of spreading the disease.

In the event you come into contact with tear gas—even a small amount—your body’s natural response will be to get rid of it by coughing and sneezing. This, in turn, will send droplets from your mouth and nose into the open air—unless you’re wearing a mask.

A face mask will filter out most tear gas particles, but even if you do inhale some, you should not take it off—make sure you move to safety first. When you’re away from the gas, uncover your face and wash it with copious amounts of water or a baking soda and water solution before you put a clean mask on. For more information on how to deal with tear gas, check out our complete guide.

“I’d also recommend wearing goggles,” says Rohini Haar, an emergency physician and a research fellow at the Human Rights Center at the University of California, Berkeley. “That way you’ll be protecting all the mucous membranes of your face, like your eyes, mouth, and nose.”

Even glasses might help to some extent, though they’ll only protect you from particles coming straight at you. Everything coming from above or the side can easily make its way into your eyes.

Wearing gloves can be helpful too, though you can forgo them if you are thorough with washing or sanitizing your hands. “As long as you have a bottle of hand sanitizer with you and use it whenever you come into contact with a person or a surface, you should be fine,” says Crystal Watson, an assistant professor at Johns Hopkins’ Bloomberg School of Public Health and an expert in contact tracing.

A crowd is a crowd is a crowd

We know it’s fun to be in the middle of a protesting crowd, but to prevent infection, you’ll want to keep your distance. Derick McKinney / Unsplash

There’s no way around it—by definition, the very objective of a protest is to attract a crowd to raise awareness of a particular issue. But standing in the middle of a large group of people is the exact opposite of what you should do if you want to protect others and avoid getting infected during a pandemic.

You don’t actually have to be in the middle of a dense gathering to participate in a protest, though. Even in this kind of event, we should strive to keep that 6-foot distance from others as much as possible. Staying on the outskirts of a protest will make distancing easier, and will also help you move to safety more quickly if violence ensues.

Nevertheless, it’s important to know that even though you may intend to keep your distance, your ability to do so will be affected by a number of factors. These include the particular topography and layout of the protest location, and the position of police forces and other law enforcement officials.

If you’re arrested and moved into a closed and people-dense environment, such as a bus or a jail cell, or if you’re forced to move to an area where social distancing is not possible, keep in mind that your safety should be your first concern. If you can, move to a more open space, and if you can’t, try your best to keep your distance and your mask on.

Law enforcement could ask you to remove your mask, even if anti-mask laws have been suspended in several states due to the pandemic. If you ever face such a situation, weigh your risk and remember safety should be your priority—you may be able to avoid excessive violence by cooperating.

Things get a little more complicated if you have a higher risk of developing a severe case of COVID-19—that is, if you have any sort of immunodeficiency or underlying conditions.

“In that case, it might be good to protest from a further distance,” says Watson. She recommends that at-risk demonstrators hold their signs from the inside of their cars or bang pots and pans from the windows of their houses or apartments.

True—this is far from ideal and certainly not what comes to mind when you think of a protest. But we should not forget that by taking care of ourselves, we’re also caring for our community by not putting even more pressure on our healthcare system.

When to opt for a stay-at-home protest

Your right to protest is sacred—it’s protected both by the US Constitution and the Universal Declaration of Human Rights. No one should tell you when you can or cannot protest, but there are some circumstances in which you should ask yourself if going out during a pandemic is the best decision for you and your community.

If you have been at high risk of infection lately, suspect you might be infected, or have any symptoms, it’s a good idea to stay home. Healthcare professionals have already voiced their concerns about how the nationwide demonstrations will affect the spread of COVID-19. Some of them—including Watson and Haar—agree that a second surge of infections is highly likely.

“It’s a difficult situation, because I understand the sentiment and the need to protest,” says Watson. “But I do expect there will be some increase in transmissions, though we don’t really know how big it could be.”

We need to be as careful as possible during this time, because all medical professionals can do is monitor the situation closely and act quickly when new cases are reported. This is why contact tracing is so important. “If someone is infected, we can trace the people they were with and ask them to stay at home and quarantine,” Watson says. “Then we can stunt any surge that occurs.”

Contact tracing: how to help and stay safe

Sticking to your group of friends and staying away from the big crowd will make contact tracing a lot easier. Hayley Catherine / Unsplash

Contact tracing has been used in public health for years to help stop the spread of contagious diseases. The process identifies people who might have been exposed, alerts them of their situation, and asks them to take the necessary measures to prevent further transmission. This tool has been used to control the spread of Ebola, sexually transmitted diseases, SARS, and other contagions. Multiple states are now gearing up to use it against COVID-19.

Since it’s highly unlikely you’ll know everybody at a protest, Watson recommends that you stay within your own group and try not to make contact with other people. If there’s an infection, that will make it much easier to trace.

But when protesters have been subjected to excessive force by the police and other authorities across the country, it’s normal to be wary if someone asks you for information about yourself or the people you were with at a protest. There are ways to tell the difference between public health workers and people with other agendas, though.

Most importantly, contact tracers will not address you in a public setting. “If someone is approaching you at a protest and saying that they are contact tracing, that is not real—you should not engage with them,” Watson says.

Contact-tracing professionals will only approach you through a phone call or a text message, and they should properly identify themselves as officials from your local health department before they ask anything, Watson explains. You also have the right to question their identity and ask for credentials, and they should never ask for sensitive information such as financial records or your social security number.

Always remember that protesting is your inalienable right, and you should exercise it whenever you feel like it. On the other hand, don’t feel pressured to go out if you don’t feel safe or comfortable doing so. There are a lot of ways to help the causes you care about—you can sign petitions or donate money to civil organizations. Some of the things you can do, like streaming a playlist on YouTube, don’t even require you to leave your seat—let alone your home.

A Beginner’s Guide to Bayesian Inference


Introduction

There are three main approaches to probability:

Classical

Frequentist

Bayesian

Let’s understand the differences among these 3 approaches with the help of a simple example.

Suppose we’re rolling a fair six-sided die and we want to ask what is the probability that the die shows a four? Under the Classical framework, all the possible outcomes are equally likely i.e., they have equal probabilities or chances. Hence, answering the above question, there are six possible outcomes and they are all equally likely. So, the probability of a four on a fair six-sided die is just 1/6. This Classical approach works well when we have well-defined equally likely outcomes. But when things get a little subjective then it may become a little complex.

On the other hand, the frequentist definition requires us to imagine a hypothetical infinite sequence of a particular event and then look at the relevant frequency in that hypothetical infinite sequence. In the case of rolling a fair six-sided die, if we roll it an infinite number of times, then 1/6 of the time we will get a four, and hence the probability of rolling a four on a six-sided die is 1/6 under the frequentist definition as well.
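
We obviously cannot roll a die infinitely many times, but a short simulation (a sketch of my own for illustration) shows the relative frequency settling near 1/6:

import random

# Estimate P(die shows four) by its relative frequency over many rolls.
rolls = 100_000
fours = sum(1 for _ in range(rolls) if random.randint(1, 6) == 4)
print(fours / rolls)  # close to 1/6 ~= 0.1667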

Now let’s proceed a little further and ask whether our die is fair or not. Under the frequentist paradigm, the probability is either zero (if it’s not a fair die) or one (if it is), because under the frequentist approach everything is measured from a physical perspective, so the die is either fair or not. We cannot assign a probability to the fairness of the die. Frequentists are very objective in how they define probabilities, but their approach cannot give intuitive answers for some of the deeper subjective questions.

The Bayesian perspective allows us to incorporate personal belief or opinion into the decision-making process. It takes into account what we already know about a problem before seeing any empirical evidence. We also have to acknowledge that my personal belief about an event may differ from someone else’s, so the outcome we get using the Bayesian approach may differ as well.

For example, I may say there is a 90% probability that it will rain tomorrow, whereas my friend may think there is only a 60% chance. The Bayesian perspective is therefore an inherently subjective approach to probability, but it gives more intuitive results within a mathematically rigorous framework than the Frequentist approach. Let’s discuss this in detail in the following sections.

What is Bayes’ Theorem?

Simplistically, Bayes’ theorem can be expressed through the following equation:

P(θ | X) = P(X | θ) · P(θ) / P(X)

where θ is the parameter (or hypothesis) of interest and X is the observed data.

Now let’s focus on the three components of Bayes’ theorem:

• Prior

• Likelihood

• Posterior

• Prior Distribution – This is the key factor in Bayesian inference: it allows us to incorporate our personal beliefs or judgements into the decision-making process through a mathematical representation. To express our beliefs about an unknown parameter θ, we choose a distribution function called the prior distribution. This distribution is chosen before we see any data or run any experiment.

How do we choose a prior? Theoretically, we specify a distribution function for the unknown parameter θ. One important caveat: events given a prior probability of zero will have a posterior probability of zero, and events given a prior probability of one will have a posterior probability of one, no matter what the data say. A sensible Bayesian analysis therefore avoids assigning prior probabilities of exactly 0 or 1 except to events that are logically certain or impossible. A handy and widely used technique for choosing priors is to pick a family of distribution functions flexible enough that some member of the family represents our beliefs. Now let’s understand this concept a little better.

i. Conjugate Priors – Conjugacy occurs when the posterior distribution belongs to the same family of probability density functions as the prior, but with parameter values updated to reflect the new evidence. Examples include Beta-Binomial, Gamma-Poisson and Normal-Normal (a short sketch after the Likelihood discussion below illustrates the Beta-Binomial case).

ii. Non-conjugate Priors – It is also quite possible that our belief cannot be expressed in terms of a suitable conjugate prior; in those cases, simulation tools are used to approximate the posterior distribution. An example is the Gibbs sampler.

iii. Uninformative Priors – Another approach is to minimize the amount of information that goes into the prior so as to reduce bias, letting the data have the maximum influence on the posterior. With uninformative priors, however, the results are often very similar to those of the frequentist approach.

• Likelihood – Suppose θ is the unknown parameter we are trying to estimate; let it represent the fairness of a coin. To assess that fairness, we flip the coin repeatedly, recording 1 for ‘head’ and 0 for ‘tail’ each time. These are Bernoulli trials. We want the probability of the outcomes X₁, …, Xₙ taking particular values x₁, …, xₙ given a value of θ. Viewing the outcomes as independent, we can write this in product notation. This is the probability of observing the actual data we collected (heads or tails), conditioned on the value of the parameter θ (the fairness of the coin), and can be expressed as

f(x₁, …, xₙ | θ) = ∏ᵢ θ^(xᵢ) (1 − θ)^(1 − xᵢ)

This is the concept of likelihood: the density function viewed as a function of θ. To maximize the likelihood, i.e., to make the observed data as probable as possible, we choose the θ that gives the largest value of the likelihood. This is the maximum likelihood estimate, or MLE. As a quick reminder, the generalization of the Bernoulli to N repeated, independent trials is the binomial distribution; we will use that later in the article.
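To tie the likelihood and prior sections together, here is a minimal Python sketch (my illustration, with made-up flip data rather than anything from the article) showing that the Bernoulli MLE is simply the sample proportion of heads, and how a conjugate Beta prior updates to a Beta posterior.

# Hypothetical coin-flip data for illustration: 1 = head, 0 = tail
flips = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
heads = sum(flips)
tails = len(flips) - heads

# Maximum likelihood estimate of a Bernoulli parameter: the sample proportion of heads
theta_mle = heads / len(flips)

# Beta-Binomial conjugacy: a Beta(a, b) prior updates to a Beta(a + heads, b + tails) posterior
a_prior, b_prior = 2, 2                      # illustrative prior, centred on 0.5
a_post, b_post = a_prior + heads, b_prior + tails
theta_post_mean = a_post / (a_post + b_post)

print(f"MLE of theta:            {theta_mle:.3f}")
print(f"Posterior mean of theta: {theta_post_mean:.3f}")

Note how the posterior mean is pulled slightly toward the prior mean of 0.5; with more data the two estimates converge.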

Mechanism of Bayesian Inference:

The Bayesian approach treats probability as a degree of belief about an event given the available evidence. In Bayesian learning, θ itself is treated as a random variable. Let’s understand the Bayesian inference mechanism a little better with an example.

Inference example using the Frequentist vs. the Bayesian approach: Suppose my friend challenged me to a bet where I need to predict whether a particular coin is fair or loaded. She told me, “This coin turned up heads 70% of the time when I flipped it several times. Now I’m giving you a chance to flip the coin 5 times, and then you have to place your bet.” I flipped the coin 5 times; heads came up twice and tails three times. At first, I thought like a frequentist.

So, θ is an unknown parameter representing the fairness of the coin, and it can take one of two values:

θ = {fair, loaded}

Additionally, I assumed that the number of heads X in n flips follows a Binomial distribution whose success probability depends on θ:

f(x | θ) = C(n, x) · (0.5)^x · (0.5)^(n − x) if θ = fair
f(x | θ) = C(n, x) · (0.7)^x · (0.3)^(n − x) if θ = loaded

where C(n, x) is the binomial coefficient. In our case n = 5.

My likelihood function is this density viewed as a function of θ for the data I observed. Heads came up twice, so X = 2, which gives

f(X = 2 | θ) = C(5, 2) · (0.5)^5 ≈ 0.31 if θ = fair
f(X = 2 | θ) = C(5, 2) · (0.7)^2 · (0.3)^3 ≈ 0.13 if θ = loaded

Therefore, using the frequentist approach, I conclude that the maximum likelihood estimate is θ̂ = fair, since 0.31 > 0.13.

Now comes the tricky part. If I’m asked how sure I am about my prediction, I cannot give a satisfying answer: in the frequentist world, the coin is a physical object, so the probability that it is fair can only be 0 or 1; the coin either is fair or it isn’t.

This is where the Bayesian approach helps. Based on my friend’s claim that the coin came up heads 70% of the time, I strongly suspect it is loaded, so I set my prior to P(θ = loaded) = 0.9, and hence P(θ = fair) = 0.1. I can now update this prior belief with the data and get the posterior probability using Bayes’ theorem.

The numerator of Bayes’ theorem is the likelihood times the prior:

f(X = 2 | θ) · P(θ)

The denominator is a normalizing constant obtained by summing that expression over all possible values of θ, of which there are only two here (fair or loaded):

P(X = 2) = f(X = 2 | θ = fair) · P(fair) + f(X = 2 | θ = loaded) · P(loaded)

Hence, after substituting X = 2, we can calculate the posterior probability of the coin being loaded or fair. Do it yourself and let me know your answer! You will find that this conclusion carries far more information for placing a bet than the frequentist answer did.
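If you want to check your answer, the short Python sketch below (added for illustration) plugs the numbers of this example straight into Bayes’ theorem with the discrete prior P(loaded) = 0.9.

from math import comb

n, x = 5, 2                                       # 5 flips, 2 heads observed

# Likelihood of the data under each hypothesis (same numbers as above)
lik_fair = comb(n, x) * 0.5**x * 0.5**(n - x)     # ≈ 0.31
lik_loaded = comb(n, x) * 0.7**x * 0.3**(n - x)   # ≈ 0.13

# Prior beliefs
prior_fair, prior_loaded = 0.1, 0.9

# Bayes' theorem with a discrete prior: posterior = likelihood * prior / evidence
evidence = lik_fair * prior_fair + lik_loaded * prior_loaded
post_loaded = lik_loaded * prior_loaded / evidence

print(f"P(theta = loaded | X = 2) = {post_loaded:.2f}")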

Application of Bayesian Inference in financial risk modeling:

Bayesian inference has found applications in various widely used algorithms, e.g., regression, random forests, and neural networks. It has also gained popularity in banks’ operational risk modelling. A bank’s operational loss data typically contains loss events with low frequency but high severity, and for such low-frequency cases Bayesian inference is useful because it does not require a lot of data.

Earlier, frequentist methods were used for operational risk models, but because they cannot express parameter uncertainty directly, Bayesian inference is considered more informative: it can combine expert opinion with actual data to derive posterior distributions for the parameters of the severity and frequency distributions. Typically, the bank’s internal loss data is divided into several buckets, the loss frequencies of each bucket are informed by expert judgement, and these are then fitted to probability distributions.
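As a simplified, purely illustrative sketch of this kind of update (my example, not the actual model used by any bank), suppose an expert’s view of the annual number of loss events in one bucket is encoded as a Gamma prior on a Poisson rate; observed yearly counts then update it through Gamma-Poisson conjugacy, one of the conjugate pairs mentioned earlier.

# Illustrative numbers only; not taken from any real bank's data.
# Expert judgement about the annual loss-event rate, encoded as a Gamma(alpha, beta) prior
alpha_prior, beta_prior = 4.0, 2.0        # prior mean rate = alpha / beta = 2 events per year

# Hypothetical internal loss data: event counts observed in one bucket over 3 years
yearly_counts = [1, 3, 2]

# Gamma-Poisson conjugacy: Gamma(alpha, beta) prior -> Gamma(alpha + sum(counts), beta + n_years)
alpha_post = alpha_prior + sum(yearly_counts)
beta_post = beta_prior + len(yearly_counts)

posterior_mean_rate = alpha_post / beta_post
print(f"Posterior mean loss-event rate: {posterior_mean_rate:.2f} events per year")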

Hello! I am Ananya. I have a degree in Economics and I have been working as a financial risk analyst for the last 5 years. I am also a voracious reader of Data Science Blogs just as you are. This is my first article for Analytics Vidhya. Hope you found this article useful.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

