Using AWS S3 With Python Boto3


This article was published as a part of the Data Science Blogathon.

Introduction

AWS S3 is one of the object storage services offered by AWS that lets users store and retrieve files quickly and securely from anywhere. Users can combine S3 with other services to build numerous scalable applications. Boto3 is the AWS SDK for Python, which can manage AWS services, including S3, and use the resources from within AWS. It helps developers create, configure, and manage AWS services, making it easy to integrate with Python applications, libraries, or scripts. This article covers how Boto3 works and how it helps interact with S3 operations such as creating, listing, and deleting buckets and objects.

What is Boto3?

Boto3 is a Python SDK or library that can manage and access various AWS services, such as Amazon S3, EC2, DynamoDB, SQS, CloudWatch, etc., through Python scripts. Boto3 has a data-driven approach: it generates classes at runtime from JSON description files shared between the AWS SDKs. Because Boto3 is generated from these shared JSON files, users get fast updates to the latest services and a consistent API across services. It provides an object-oriented, easy-to-use API as well as low-level direct service access.

Key Features of boto3

It is built on top of botocore, a Python library used to send API requests to AWS and receive responses from the service.

Supports Python 2.7+ and 3.4+ natively.

Boto3 provides sessions and per-session credentials and configuration, along with essential components like authentication, parameter, and response handling (see the sketch after this list).

Has a consistent and up-to-date interface.
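As an example of per-session credentials, here is a minimal sketch (the key values and region below are placeholders, not real credentials):

import boto3

# Each Session carries its own credentials and configuration
session = boto3.Session(
    aws_access_key_id='your_aws_access_key_id',          # placeholder
    aws_secret_access_key='your_aws_secret_access_key',  # placeholder
    region_name='us-east-1'                              # placeholder region
)

# Clients built from this session inherit its credentials
s3 = session.client('s3')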

Working with AWS S3 and Boto3

Using the Boto3 library or SDK with Amazon S3 allows users to create, delete, and update S3 buckets, objects, bucket policies, etc., from Python programs or scripts quickly. Boto3 offers two abstractions, namely client and resource. Users can choose the client abstraction if they want to work with single S3 files, or the resource abstraction if they want to work with multiple S3 buckets. Clients provide a low-level interface to the AWS services, whereas resources are a higher-level abstraction than clients.
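A minimal sketch of the two abstractions side by side, assuming credentials are already configured in your environment:

import boto3

s3_client = boto3.client('s3')      # low-level: methods map closely onto raw S3 API calls
s3_resource = boto3.resource('s3')  # higher-level: object-oriented wrappers

# The same bucket listing, via each abstraction
for bucket in s3_client.list_buckets()['Buckets']:
    print(bucket['Name'])
for bucket in s3_resource.buckets.all():
    print(bucket.name)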

Installation of boto3 and Building AWS S3 Client

Installing boto3 to your application:

On the terminal, run:

pip list

The above command lists the installed packages. If Boto3 is not installed, install it with the following command:

pip install boto3

Build an S3 client to access the service methods:

Create an S3 client that helps access the objects stored in the S3 environment, setting credentials that include aws_access_key_id and aws_secret_access_key. Credentials such as the Access Key and Secret Key are essential to access the S3 bucket and to run the following code.

# Import the necessary packages
import boto3

# Now, build a client
s3 = boto3.client(
    's3',
    aws_access_key_id='enter your_aws_access_key_id',
    aws_secret_access_key='enter your_aws_secret_access_key',
    region_name='enter your_aws_region_name'
)

AWS S3 Operations With Boto3

Creating buckets:

To create an S3 bucket, use the create_bucket() method with the Bucket and ACL parameters. ACL stands for Access Control List, which manages access to S3 buckets and objects. It is important to note that bucket names must be unique across the whole AWS platform.

my_bucket = "enter your s3 bucket name that has to be created" bucket = s3.create_bucket( ACL='private', Bucket= my_bucket )

Listing buckets:

To list all the available buckets, use the list_buckets() method.

bucket_response = s3.list_buckets()

# Output the bucket names
print('Existing buckets are:')
for bucket in bucket_response['Buckets']:
    print(f' {bucket["Name"]}')

Deleting Buckets:

A bucket in S3 can be deleted using the delete_bucket() method. The bucket must be empty, meaning it contains no objects, for the deletion to succeed.

my_bucket = "enter your s3 bucket name that has to be deleted" response = s3.delete_bucket(Bucket= my_bucket) print("Bucket has been deleted successfully !!!")

Listing the files from a bucket:

Files or objects in an S3 bucket can be listed using the list_objects() method or the list_objects_v2() method.

my_bucket = "enter your s3 bucket name from which objects or files has to be listed out" response = s3.list_objects(Bucket= my_bucket, MaxKeys=10, Preffix="only_files_starting_with_this_string")

The MaxKeys argument sets the maximum number of objects to be listed. The Prefix argument restricts the listing to objects whose keys (names) start with a specific prefix.
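Note that a single listing call returns at most 1,000 keys; for larger buckets, a paginator is the usual approach. A minimal sketch, assuming the s3 client and my_bucket variable from above:

# Iterate over all matching objects, up to 1,000 keys per page
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=my_bucket, Prefix="only_files_starting_with_this_string"):
    for obj in page.get('Contents', []):
        print(obj['Key'])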

Another way to list objects:

s3 = boto3.client("s3") my_bucket = " enter your s3 bucket name from which objects or files has to be listed out " response = s3.list_objects_v2(Bucket=my_bucket) files = response.get("Contents") for file in files: print(f"file_name: {file['Key']}, size: {file['Size']}")

Uploading files:

To upload a file to an S3 bucket, use the upload_file() method, which takes the following parameters:

Filename: the path of the file to be uploaded

Key: the unique identifier for an object within a bucket

Bucket: the name of the bucket to which the file has to be uploaded

my_bucket = "enter your bucket name to which files has to be uploaded" file_name = "enter your file path name to be uploaded" key_name = "enter unique identifier" s3.upload_file(Filename= file_name, Bucket= my_bucket, Key= key_name)

Downloading files:

To download a file or object from a bucket to local storage, use the download_file() method with the Key, Bucket, and Filename parameters, where Filename is the local path the object will be saved to.

my_bucket = "enter your s3 bucket name from which object or files has to be downloaded" file_name = "enter file to be downloaded" key_name = "enter unique identifier" s3.download_file(Filename= file_name, Bucket= my_bucket, Key= key_name)

Deleting files:

To delete a file or object from a bucket, use the delete_object() method with Key and Bucket parameters.

my_bucket = "enter your s3 bucket name from which objects or files has to be deleted" key_name = "enter unique identifier" s3.delete_object(Bucket= my_bucket, Key= key_name)

Get the object’s metadata:

To get the file or object’s details, such as last modification time, storage class, content length, size in bytes, etc., use the head_object() method with Key and Bucket parameters.

my_bucket = "enter your s3 bucket name from which objects or file's metadata has to be obtained" key_name = "enter unique identifier" response = s3.head_object(Bucket= my_bucket, Key= key_name) Conclusion

Conclusion

AWS S3 is one of the most reliable, flexible, and durable object storage systems, allowing users to store and retrieve data. AWS describes Boto3 as a Python library or SDK (Software Development Kit) to create, manage, and configure AWS services, including S3. Boto3 operates AWS services programmatically from your applications and services.

Key Takeaways:

AWS S3 is an object storage service that helps store and retrieve files quickly.

Boto3 is a Python SDK or library that can manage Amazon S3, EC2, Dynamo DB, SQS, Cloudwatch, etc.

Boto3 clients provide a low-level interface to the AWS services, whereas resources are a higher-level abstraction than clients.

Using the Boto3 library with Amazon S3 allows users to create, list, delete, and update S3 buckets, objects, and bucket policies, etc., from Python programs or scripts quickly.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Related


Unbrick Verizon Galaxy S3 Using Odin

The Verizon Galaxy S3 has to be the most problematic variant of the Galaxy S3. If the locked bootloader on it wasn't enough, many users have seen their Verizon Galaxy S3 get bricked and stop booting after trying to flash a custom ROM, turning it into a very expensive paperweight. Normally, the only way to fix this is to send the phone to Verizon, and probably also pay the full price of the phone for a replacement.

However, thanks to XDA Recognized Developer PureMotive, we can now fix a bricked Verizon Galaxy S3 and get it running again just like new. The procedure involves flashing a few files that fix the device's bootloader and make it work normally again, and the guide below will walk you through it with step-by-step instructions.

Something important to note is that the procedure is quite long, and if not followed properly, it will not work. So read the instructions carefully, twice if you have to, and your phone will be fixed and running again in no time.

Now let’s take a look at how the Verizon Galaxy S3 can be unbricked.

How to Unbrick Verizon Galaxy S3

Download Drivers

Extract the downloaded Odin archive to a convenient location on the computer to get a folder named Odin307 with 4 files inside.

Download the following files:

VRALEC.bootchain.tar.md5

BOOTLOADER_I535VRALF2_618049_REV09_user_low_ship.tar.md5

Open the downloaded torrent file in a torrent application like µTorrent to download a file named stock.vzw_root66.7z. After downloading the stock.vzw_root66.7z file, extract it to get a file named stock.vzw_root66.tar (use software like 7-Zip to extract it).

Make sure the phone is off, then boot into download mode. To do so, hold down the Volume Down, Home, and Power buttons together until a Warning!! message is displayed on the screen. Here, press Volume Up to enter download mode. A green Android and the text Downloading will be displayed on the screen.

Then, connect the phone to the computer with the USB cable and wait for Windows to finish installing drivers. Odin will say Added!! in the message box on the bottom left if the phone is detected successfully. If not, make sure the drivers are installed and also try using a different USB port – preferably a USB port on the back if using a desktop computer.

Wait till flashing is complete and you get a PASS message in Odin. When that happens, disconnect the phone from the computer, but DON’T turn it off and let it stay in download mode. Also close Odin.

Reconnect your phone to the computer (while it is in download mode). Open Odin again.

When flashing is complete and you get a PASS message, disconnect the phone and close Odin. But keep the phone in download mode, don’t turn it off.

Open Odin again and also reconnect the phone to the computer.

The phone will still not boot up completely. Now, remove the battery from the phone, then re-insert it. Don't turn it on.

In recovery, use the volume buttons to scroll up/down and the home button to select options.

Select wipe data/factory reset, then select Yes on next screen to confirm. Wait a while till the data wipe is complete (this will NOT delete your personal files on the SD cards).

Select wipe cache, confirm, then wait for cache wipe to be completed.

Then, select reboot system now to reboot the phone.

The phone will now boot up properly into Android and is fixed, and you will be able to use it now.

Your Verizon Galaxy S3 is now fixed, and has saved you from sending it in for repair to Verizon. Do let us know how the procedure works!

Pneumonia Detection Using Cnn With Implementation In Python

Hey there! I just finished another deep learning project several hours ago, so now I want to share what I actually did there. The objective of this challenge is to determine whether a person suffers from pneumonia or not. If yes, then determine whether it's caused by bacteria or a virus. Well, I think this project should be called classification instead of detection.

Several x-ray images in the dataset used in this project.

In other words, this task is going to be a multiclass classification problem where the label names are: normal, virus, and bacteria. In order to solve this problem, I will use a CNN (Convolutional Neural Network), thanks to its excellent ability to perform image classification. Not only that, but here I also implement the image augmentation technique as an approach to improve model performance. By the way, I obtained 80% accuracy on test data, which is pretty impressive to me.

The dataset used in this project can be downloaded from this Kaggle link. The size of the entire dataset itself is around 1 GB, so it might take a while to download. Or, we can also directly create a Kaggle Notebook and code the entire project there, so we don’t even need to download anything. Next, if you explore the dataset folder, you will see that there are 3 subfolders, namely train, test and val.

Well, I think those folder names are self-explanatory. In addition, the data in the train folder consists of 1341, 1345, and 2530 samples for the normal, virus, and bacteria classes respectively. I think that's all for the intro, let's now jump into the code!

Note: I put the entire code used in this project at the end of this article.

Loading modules and train images

The very first thing to do when working on a computer vision project is to load all required modules and the image data itself. I use the tqdm module to display a progress bar; you'll see why it is useful later on. The last import here is ImageDataGenerator from the Keras module, which will help us implement the image augmentation technique during the training process.

import os
import cv2
import pickle
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import confusion_matrix
from keras.models import Model, load_model
from keras.layers import Dense, Input, Conv2D, MaxPool2D, Flatten
from keras.preprocessing.image import ImageDataGenerator

np.random.seed(22)

Next, I define two functions to load image data from each folder. The two functions below might look identical at a glance, but there is actually a small difference in the line that extracts the labels. This is done because the filename structures in the NORMAL and PNEUMONIA folders are slightly different. Despite that difference, the other processing done by both functions is essentially the same. First, all images are resized to 200 by 200 pixels.

This is important since the images in the folders have different dimensions, while the neural network can only accept data with a fixed array size. Next, basically all images are stored with 3 color channels, which I think is just redundant for x-ray images. So the idea here is to convert all those color images to grayscale.

# Do not forget to include the last slash
def load_normal(norm_path):
    norm_files = np.array(os.listdir(norm_path))
    norm_labels = np.array(['normal'] * len(norm_files))
    norm_images = []
    for image in tqdm(norm_files):
        image = cv2.imread(norm_path + image)
        image = cv2.resize(image, dsize=(200, 200))
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        norm_images.append(image)
    norm_images = np.array(norm_images)
    return norm_images, norm_labels

def load_pneumonia(pneu_path):
    pneu_files = np.array(os.listdir(pneu_path))
    pneu_labels = np.array([pneu_file.split('_')[1] for pneu_file in pneu_files])
    pneu_images = []
    for image in tqdm(pneu_files):
        image = cv2.imread(pneu_path + image)
        image = cv2.resize(image, dsize=(200, 200))
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        pneu_images.append(image)
    pneu_images = np.array(pneu_images)
    return pneu_images, pneu_labels

As the two functions above have been declared, we can now use them to load the train data. If you run the code below you'll also see why I chose to use the tqdm module in this project.

norm_images, norm_labels = load_normal('/kaggle/input/chest-xray-pneumonia/chest_xray/train/NORMAL/')

pneu_images, pneu_labels = load_pneumonia('/kaggle/input/chest-xray-pneumonia/chest_xray/train/PNEUMONIA/')

The progress bar displayed using tqdm module.

Up to this point, we already have several arrays: norm_images, norm_labels, pneu_images, and pneu_labels. The arrays with the _images suffix contain the preprocessed images, while the arrays with the _labels suffix store all ground truths (a.k.a. labels). In other words, both norm_images and pneu_images are going to be our X data, while the rest is going to be our y data. To make things more straightforward, I concatenate the values of those arrays and store them in the X_train and y_train arrays.

X_train = np.append(norm_images, pneu_images, axis=0)
y_train = np.append(norm_labels, pneu_labels)

The shape of the features (X) and labels (y).

By the way, I obtain the number of images of each class using the following code:

Finding out the number of unique values in our training set
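A minimal way to do that with NumPy (assuming the y_train array built above) would be:

# Count how many samples belong to each class label
classes, counts = np.unique(y_train, return_counts=True)
print(dict(zip(classes, counts)))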

Data Analysis Using Python Pandas

In this tutorial, we are going to see data analysis using the Python pandas library. The performance-critical parts of pandas are written in C, so we don't run into speed problems. Pandas is famous for data analysis. We have two main data storage structures in pandas: Series and DataFrame. Let's see them one by one.

1. Series

A Series is a 1D array with customized index and values. We can create a Series object using the pandas.Series(data, index) class. Series accepts integers, lists, and dictionaries as data. Let's see some examples.

Example

# importing the pandas library
import pandas as pd

# data
data = [1, 2, 3]

# creating Series object
# Series automatically takes the default index
series = pd.Series(data)
print(series)

Output

If you run the above program, you will get the following result.

0    1
1    2
2    3
dtype: int64

How do we set a customized index? See the example below.

Example

# importing the pandas library
import pandas as pd

# data
data = [1, 2, 3]

# index
index = ['a', 'b', 'c']

# creating Series object
series = pd.Series(data, index)
print(series)

Output

If you run the above program, you will get the following result.

a    1
b    2
c    3
dtype: int64

When we give the data as a dictionary to the Series class, then it takes keys as index and values as actual data. Let’s see one example.

Example

# importing the pandas library
import pandas as pd

# data
data = {'a': 97, 'b': 98, 'c': 99}

# creating Series object
series = pd.Series(data)
print(series)

Output

If you run the above program, you will get the following results.

a    97
b    98
c    99
dtype: int64

We can access the data from the Series using an index. Let’s see the examples.

Example

# importing the pandas library
import pandas as pd

# data
data = {'a': 97, 'b': 98, 'c': 99}

# creating Series object
series = pd.Series(data)

# accessing the data from the Series using indexes
print(series['a'], series['b'], series['c'])

Output

If you run the above code, you will get the following results.

97 98 99

2. DataFrame

We have seen how to use the Series class in pandas. Let's see how to use the DataFrame class. DataFrame is a data structure class in pandas that contains rows and columns.

We can create DataFrame objects using lists, dictionaries, Series, etc. Let's create a DataFrame using lists.

Example

# importing the pandas library
import pandas as pd

# lists
names = ['Tutorialspoint', 'Mohit', 'Sharma']
ages = [25, 32, 21]

# creating a DataFrame
data_frame = pd.DataFrame({'Name': names, 'Age': ages})

# printing the DataFrame
print(data_frame)

Output

If you run the above program, you will get the following results.

             Name  Age
0  Tutorialspoint   25
1           Mohit   32
2          Sharma   21

Let’s see how to create a data frame object using the Series.

Example

# importing the pandas library
import pandas as pd

# Series
_1 = pd.Series([1, 2, 3])
_2 = pd.Series([1, 4, 9])
_3 = pd.Series([1, 8, 27])

# creating a DataFrame
data_frame = pd.DataFrame({"a": _1, "b": _2, "c": _3})

# printing the DataFrame
print(data_frame)

Output

If you run the above code, you will get the following results.

   a  b   c
0  1  1   1
1  2  4   8
2  3  9  27

We can access the data from the DataFrames using the column name. Let’s see one example.

Example

# importing the pandas library
import pandas as pd

# Series
_1 = pd.Series([1, 2, 3])
_2 = pd.Series([1, 4, 9])
_3 = pd.Series([1, 8, 27])

# creating a DataFrame
data_frame = pd.DataFrame({"a": _1, "b": _2, "c": _3})

# accessing the entire column with name 'a'
print(data_frame['a'])

Output

If you run the above code, you will get the following results.

0    1
1    2
2    3
Name: a, dtype: int64

How Can Tensorflow Be Used With Estimator To Compile The Model Using Python?

TensorFlow can be used with the Estimator to compile and train the model with the help of the 'train' method.

Read More: What is TensorFlow and how Keras work with TensorFlow to create Neural Networks?

We will use the Keras Sequential API, which is helpful in building a sequential model that is used to work with a plain stack of layers, where every layer has exactly one input tensor and one output tensor.

A neural network that contains at least one convolutional layer is known as a convolutional neural network. We can use a convolutional neural network to build a learning model.

TensorFlow Text contains a collection of text-related classes and ops that can be used with TensorFlow 2.0. TensorFlow Text can be used to preprocess data for sequence modelling.

We are using Google Colaboratory to run the code below. Google Colab or Colaboratory helps run Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphical Processing Units). Colaboratory has been built on top of Jupyter Notebook.

An Estimator is TensorFlow’s high-level representation of a complete model. It is designed for easy scaling and asynchronous training.

The model is trained using the iris data set. There are 4 features and one label:

sepal length

sepal width

petal length

petal width
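A minimal sketch of how such an Estimator is typically built and trained with the 'train' method (the feature names, hidden_units, steps, and input_fn below are assumptions based on the standard TensorFlow iris Estimator example, not code from this article):

import tensorflow as tf

# One numeric feature column per iris measurement (names are assumptions)
my_feature_columns = [
    tf.feature_column.numeric_column(key=key)
    for key in ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']]

# A DNN Estimator with two hidden layers and three output classes
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[30, 10],
    n_classes=3)

# 'train' runs the input pipeline and optimizes the model for 5000 steps;
# input_fn is a hypothetical function returning a tf.data.Dataset of features and labels
classifier.train(
    input_fn=lambda: input_fn(train_features, train_labels, training=True),
    steps=5000)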

Example print("The model is being trained") Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. INFO:tensorflow:Calling model_fn. WARNING:tensorflow:Layer dnn is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because its dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/adagrad.py:83: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0... INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpbhg2uvbr/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0... INFO:tensorflow:loss = 1.1140382, step = 0 INFO:tensorflow:global_step/sec: 312.415 INFO:tensorflow:loss = 0.8781501, step = 100 (0.321 sec) INFO:tensorflow:global_step/sec: 375.535 INFO:tensorflow:loss = 0.80712265, step = 200 (0.266 sec) INFO:tensorflow:global_step/sec: 372.712 INFO:tensorflow:loss = 0.7615077, step = 300 (0.268 sec) INFO:tensorflow:global_step/sec: 368.782 INFO:tensorflow:loss = 0.733555, step = 400 (0.271 sec) INFO:tensorflow:global_step/sec: 372.689 INFO:tensorflow:loss = 0.6983943, step = 500 (0.268 sec) INFO:tensorflow:global_step/sec: 370.308 INFO:tensorflow:loss = 0.67940104, step = 600 (0.270 sec) INFO:tensorflow:global_step/sec: 373.374 INFO:tensorflow:loss = 0.65386146, step = 700 (0.268 sec) INFO:tensorflow:global_step/sec: 368.335 INFO:tensorflow:loss = 0.63730353, step = 800 (0.272 sec) INFO:tensorflow:global_step/sec: 371.575 INFO:tensorflow:loss = 0.61313766, step = 900 (0.269 sec) INFO:tensorflow:global_step/sec: 371.975 INFO:tensorflow:loss = 0.6123625, step = 1000 (0.269 sec) INFO:tensorflow:global_step/sec: 369.615 INFO:tensorflow:loss = 0.5957534, step = 1100 (0.270 sec) INFO:tensorflow:global_step/sec: 374.054 INFO:tensorflow:loss = 0.57203, step = 1200 (0.267 sec) INFO:tensorflow:global_step/sec: 369.713 INFO:tensorflow:loss = 0.56556034, step = 1300 (0.270 sec) INFO:tensorflow:global_step/sec: 366.202 INFO:tensorflow:loss = 0.547443, step = 1400 (0.273 sec) INFO:tensorflow:global_step/sec: 361.407 INFO:tensorflow:loss = 0.53326523, step = 1500 (0.277 sec) INFO:tensorflow:global_step/sec: 367.461 INFO:tensorflow:loss = 0.51837724, step = 1600 (0.272 sec) INFO:tensorflow:global_step/sec: 364.181 INFO:tensorflow:loss = 0.5281174, step = 1700 (0.275 sec) INFO:tensorflow:global_step/sec: 368.139 INFO:tensorflow:loss = 0.5139683, step = 1800 (0.271 sec) 
INFO:tensorflow:global_step/sec: 366.277 INFO:tensorflow:loss = 0.51073176, step = 1900 (0.273 sec) INFO:tensorflow:global_step/sec: 366.634 INFO:tensorflow:loss = 0.4949246, step = 2000 (0.273 sec) INFO:tensorflow:global_step/sec: 364.732 INFO:tensorflow:loss = 0.49381495, step = 2100 (0.274 sec) INFO:tensorflow:global_step/sec: 365.006 INFO:tensorflow:loss = 0.48916715, step = 2200 (0.274 sec) INFO:tensorflow:global_step/sec: 366.902 INFO:tensorflow:loss = 0.48790723, step = 2300 (0.273 sec) INFO:tensorflow:global_step/sec: 362.232 INFO:tensorflow:loss = 0.47671652, step = 2400 (0.276 sec) INFO:tensorflow:global_step/sec: 368.592 INFO:tensorflow:loss = 0.47324088, step = 2500 (0.271 sec) INFO:tensorflow:global_step/sec: 371.611 INFO:tensorflow:loss = 0.46822113, step = 2600 (0.269 sec) INFO:tensorflow:global_step/sec: 362.345 INFO:tensorflow:loss = 0.4621966, step = 2700 (0.276 sec) INFO:tensorflow:global_step/sec: 362.788 INFO:tensorflow:loss = 0.47817266, step = 2800 (0.275 sec) INFO:tensorflow:global_step/sec: 368.473 INFO:tensorflow:loss = 0.45853442, step = 2900 (0.271 sec) INFO:tensorflow:global_step/sec: 360.944 INFO:tensorflow:loss = 0.44062576, step = 3000 (0.277 sec) INFO:tensorflow:global_step/sec: 370.982 INFO:tensorflow:loss = 0.4331399, step = 3100 (0.269 sec) INFO:tensorflow:global_step/sec: 366.248 INFO:tensorflow:loss = 0.45120597, step = 3200 (0.273 sec) INFO:tensorflow:global_step/sec: 371.703 INFO:tensorflow:loss = 0.4403404, step = 3300 (0.269 sec) INFO:tensorflow:global_step/sec: 362.176 INFO:tensorflow:loss = 0.42405623, step = 3400 (0.276 sec) INFO:tensorflow:global_step/sec: 363.283 INFO:tensorflow:loss = 0.41672814, step = 3500 (0.275 sec) INFO:tensorflow:global_step/sec: 363.529 INFO:tensorflow:loss = 0.42626005, step = 3600 (0.275 sec) INFO:tensorflow:global_step/sec: 367.348 INFO:tensorflow:loss = 0.4089098, step = 3700 (0.272 sec) INFO:tensorflow:global_step/sec: 363.067 INFO:tensorflow:loss = 0.41276374, step = 3800 (0.275 sec) INFO:tensorflow:global_step/sec: 364.771 INFO:tensorflow:loss = 0.4112524, step = 3900 (0.274 sec) INFO:tensorflow:global_step/sec: 363.167 INFO:tensorflow:loss = 0.39261794, step = 4000 (0.275 sec) INFO:tensorflow:global_step/sec: 362.082 INFO:tensorflow:loss = 0.41160905, step = 4100 (0.276 sec) INFO:tensorflow:global_step/sec: 364.979 INFO:tensorflow:loss = 0.39620766, step = 4200 (0.274 sec) INFO:tensorflow:global_step/sec: 363.323 INFO:tensorflow:loss = 0.39696264, step = 4300 (0.275 sec) INFO:tensorflow:global_step/sec: 361.25 INFO:tensorflow:loss = 0.38196522, step = 4400 (0.277 sec) INFO:tensorflow:global_step/sec: 365.666 INFO:tensorflow:loss = 0.38667366, step = 4500 (0.274 sec) INFO:tensorflow:global_step/sec: 361.202 INFO:tensorflow:loss = 0.38149032, step = 4600 (0.277 sec) INFO:tensorflow:global_step/sec: 365.038 INFO:tensorflow:loss = 0.37832782, step = 4700 (0.274 sec) INFO:tensorflow:global_step/sec: 366.375 INFO:tensorflow:loss = 0.3726803, step = 4800 (0.273 sec) INFO:tensorflow:global_step/sec: 366.474 INFO:tensorflow:loss = 0.37167495, step = 4900 (0.273 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5000... INFO:tensorflow:Saving checkpoints for 5000 into /tmp/tmpbhg2uvbr/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5000... INFO:tensorflow:Loss for final step: 0.36297452.

How To Read Text Files Using Linecache In Python

Solution

The linecache module implements a cache which holds the contents of files, parsed into separate lines, in memory. The module returns lines by indexing into a list, saving time over repeatedly reading the file and parsing lines to find the one desired.

The linecache module is very useful when looking for multiple lines from the same file.

Prepare test data. You can get this text by just using Google and searching for sample text.

Lorem ipsum dolor sit amet, causae apeirian ea his, duo cu congue prodesset. Ut epicuri invenire duo, novum ridens eu has, in natum meliore noluisse sea. Has ei stet explicari. No nam eirmod deterruisset, nusquam electram rationibus ad sea, interesset delicatissimi et sit. Purto molestiae cu eum, in per hinc periculis intellegam.

Id porro facete cum. No est veritus detraxit facilisis, sit ea clita decore essent. Ut eam labores fuisset menandri, ex sit brute viderer eleifend, altera argumentum vel ex. Duo at zril sensibus, eu vim ullum assentior, quando possit at his.

Te nam tempor posidonium scripserit, eam mundi reprimique dissentias ne. Vim te soleat offendit democritum. Nam an diam elaboraret, quaeque dissentias an has. Autem legendos dignissim ad vis, sea ex amet petentium reprehendunt, inermis constituam philosophia ne mel. Esse noster lobortis usu ne.

Nec reque postea urbanitas ut, mea in nulla invidunt ocurreret. Ei duo iuvaret numquam. Ferri nemore audire te est, mel et detracto noluisse. Nec eu habeo justo, id pro posse apeirian volutpat. Mea sonet quaestio ne.

Atqui quaeque alienum te vim. Graeco aliquip liberavisse pro ut. Te similique reformidans usu, te mundi aliquando ius. Meis scripta minimum quo no, meis prima fabellas eu eam, laoreet delicata forensibus ut vim. Et quo vocibus mediocritatem, atqui summo an eam.

Example

import os
import tempfile

text = """
Lorem ipsum dolor sit amet, causae apeirian ea his, duo cu congue prodesset. Ut epicuri invenire duo, novum ridens eu has, in natum meliore noluisse sea. Has ei stet explicari. No nam eirmod deterruisset, nusquam electram rationibus ad sea, interesset delicatissimi et sit. Purto molestiae cu eum, in per hinc periculis intellegam.

Id porro facete cum. No est veritus detraxit facilisis, sit ea clita decore essent. Ut eam labores fuisset menandri, ex sit brute viderer eleifend, altera argumentum vel ex. Duo at zril sensibus, eu vim ullum assentior, quando possit at his.

Te nam tempor posidonium scripserit, eam mundi reprimique dissentias ne. Vim te soleat offendit democritum. Nam an diam elaboraret, quaeque dissentias an has. Autem legendos dignissim ad vis, sea ex amet petentium reprehendunt, inermis constituam philosophia ne mel. Esse noster lobortis usu ne.

Nec reque postea urbanitas ut, mea in nulla invidunt ocurreret. Ei duo iuvaret numquam. Ferri nemore audire te est, mel et detracto noluisse. Nec eu habeo justo, id pro posse apeirian volutpat. Mea sonet quaestio ne.

Atqui quaeque alienum te vim. Graeco aliquip liberavisse pro ut. Te similique reformidans usu, te mundi aliquando ius. Meis scripta minimum quo no, meis prima fabellas eu eam, laoreet delicata forensibus ut vim. Et quo vocibus mediocritatem, atqui summo an eam.
"""

1. Create a function to create a temporary file and delete it after usage.

def make_tempfile():
    """
    Function: Create a temporary file.
    mkstemp() and mkdtemp() to create temporary files and directories
    args: None
    return: Temp file name.
    """
    fd, temp_file = tempfile.mkstemp()
    os.close(fd)
    with open(temp_file, 'wt') as f:
        f.write(text)
    return temp_file

def cleanup(temp_file):
    os.unlink(temp_file)

3. Read specific lines using linecache. The line numbers of files read by the linecache module start with 1, unlike lists, which index from 0. This is an important point to remember.

import os
import tempfile
import linecache

text = """
Lorem ipsum dolor sit amet, causae apeirian ea his, duo cu congue prodesset. Ut epicuri invenire duo, novum ridens eu has, in natum meliore noluisse sea. Has ei stet explicari. No nam eirmod deterruisset, nusquam electram rationibus ad sea, interesset delicatissimi et sit. Purto molestiae cu eum, in per hinc periculis intellegam.

Id porro facete cum. No est veritus detraxit facilisis, sit ea clita decore essent. Ut eam labores fuisset menandri, ex sit brute viderer eleifend, altera argumentum vel ex. Duo at zril sensibus, eu vim ullum assentior, quando possit at his.

Te nam tempor posidonium scripserit, eam mundi reprimique dissentias ne. Vim te soleat offendit democritum. Nam an diam elaboraret, quaeque dissentias an has. Autem legendos dignissim ad vis, sea ex amet petentium reprehendunt, inermis constituam philosophia ne mel. Esse noster lobortis usu ne.

Nec reque postea urbanitas ut, mea in nulla invidunt ocurreret. Ei duo iuvaret numquam. Ferri nemore audire te est, mel et detracto noluisse. Nec eu habeo justo, id pro posse apeirian volutpat. Mea sonet quaestio ne.

Atqui quaeque alienum te vim. Graeco aliquip liberavisse pro ut. Te similique reformidans usu, te mundi aliquando ius. Meis scripta minimum quo no, meis prima fabellas eu eam, laoreet delicata forensibus ut vim. Et quo vocibus mediocritatem, atqui summo an eam.
"""

def make_tempfile():
    """
    Function: Create a temporary file.
    mkstemp() and mkdtemp() to create temporary files and directories
    args: None
    return: Temp file name.
    """
    directory = os.getcwd()
    fd, temp_file = tempfile.mkstemp(dir=directory)
    os.close(fd)
    with open(temp_file, 'wt') as f:
        f.write(text)
    return temp_file

def cleanup(temp_file):
    os.unlink(temp_file)

# Make a file with ipsum data.
filename = make_tempfile()
print(f"Output \n {filename}")

split_line = '\n'

# Pick the lines from source.
print(f"*** Displaying first 5 lines directly from the source \n {text.split(split_line)[4]}")

# pick out the same line from cache
print(f" \n *** Displaying first 5 lines from the cache \n {linecache.getline(filename, 5)}")

# cleanup the tempfile by using unlink
cleanup(filename)

Output

C:\Users\sasan\PycharmProjects\blog\TutorialPoints\Updated_Code\tmpazax_yne

*** Displaying first 5 lines directly from the source
Id porro facete cum. No est veritus detraxit facilisis, sit ea clita decore essent. Ut eam labores fuisset menandri, ex sit brute viderer eleifend, altera argumentum vel ex. Duo at zril sensibus, eu vim ullum assentior, quando possit at his.

*** Displaying first 5 lines from the cache
Id porro facete cum. No est veritus detraxit facilisis, sit ea clita decore essent. Ut eam labores fuisset menandri, ex sit brute viderer eleifend, altera argumentum vel ex. Duo at zril sensibus, eu vim ullum assentior, quando possit at his.

4. Linecache always includes the newline at the end of the line. Therefore, if the line is empty, the return value is just the newline.

See below.

import linecache

# Make a file with ipsum data.
filename = make_tempfile()
print(f"Output \n {filename}")

# Blank lines include the newline.
print(f"\n *** The number of lines in the text is 13.")
print(" \n *** Displaying the last line from linecache, which should be a new line \n {!r}".format(linecache.getline(filename, 8)))

cleanup(filename)

Output

C:\Users\sasan\PycharmProjects\blog\TutorialPoints\Updated_Code\tmp352zirvn

*** The number of lines in the text is 13.

*** Displaying the last line from linecache, which should be a new line
'\n'

5. Conclusion – When an application needs random access to files, linecache makes it easy to read lines by their line number. The contents of the file are maintained in a cache, so be careful of memory consumption.
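If the cache grows too large, it can be dropped explicitly; a minimal sketch using the module's own API:

import linecache

# Discard every cached file; later getline() calls re-read from disk
linecache.clearcache()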
