This is an introductory tutorial on data analysis and visualization with pandas, a popular Python data analysis library. Here is a quick summary of what will be covered in this tutorial:
Let’s proceed to the next section and start installing the necessary packages.
It is highly recommended to create a virtual environment before you continue. Activate it and run the following commands to install all the required dependencies:
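Once pandas is installed, a minimal analysis session might look like the following. This is just an illustrative sketch with made-up sales data, not the dataset used later in the tutorial:

```python
import pandas as pd

# Hypothetical sales data, purely for illustration.
df = pd.DataFrame({
    "product": ["apple", "banana", "apple", "banana"],
    "units": [10, 5, 7, 3],
})

# Group by product and sum the units sold.
summary = df.groupby("product")["units"].sum()
print(summary)
```

The same `summary` object can be plotted directly with `summary.plot(kind="bar")`, which is where the visualization part of this tutorial comes in.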
Previously, I covered a beginner’s guide to Locust in Introduction to Locust: An Open Source Load Testing Tool in Python. In this article, let’s explore four useful advanced features that are available in Locust:
In fact, none of the features mentioned above is new; they have been available in the Locust package for quite some time. Learning them will improve your load testing and make your life easier.
Let’s proceed to the next section and…
I have covered quite a number of FastAPI tutorials in which the servers are deployed with Uvicorn, a lightning-fast ASGI web server. At the time of this writing, Uvicorn only supports HTTP/1.1 and WebSockets. According to the official documentation, support for HTTP/2 is planned, but there is no estimated completion date.
HTTP/2 is the successor to HTTP/1.1 and comes with decreased latency while maintaining the same high-level semantics (methods, header fields, status codes, etc.). According to Wikipedia, it improves the loading of web pages via:
By reading this piece, you will learn to perform natural language processing tasks on the Khmer language in Python. For your information, Khmer is the official language of Cambodia and is also widely spoken in Thailand (East and Northeast) and Vietnam (Mekong Delta).
Having a specialized language processing toolkit helps a lot when building any NLP-related application that supports multiple languages. In this article, you will utilize an open-source library called khmer-nltk. Based on the official documentation,
khmer-nltk is a Khmer language processing toolkit built using conditional random fields.
At the time of this writing, it supports the following NLP tasks:
Profiling a Python program is a dynamic analysis that measures execution time: how long the code takes to execute each of the program’s functions. When functions or calls consume too many resources, they need to be optimized. Code optimization inevitably leads to cost optimization, because code that uses fewer CPU resources costs less to run on cloud infrastructure.
Developers often use varied approaches for local optimization. For example, they determine which of two functions executes the code more quickly. …
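As a quick illustration of this kind of dynamic analysis, the standard library’s cProfile module can time every function call in a run. The function below is a made-up example, not code from the article:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # A deliberately naive loop, just to give the profiler work to do.
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Collect the five most expensive calls by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report shows, per function, how many times it was called and how much time it consumed, which is exactly the information needed to decide what to optimize.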
Previously, I covered Sarcasm Text Classification using spaCy in Python. In this piece, you will learn more about the Named-Entity Recognition (NER) component instead.
For your information, NER is an NLP task that locates the entities present in unstructured text and classifies them into predefined categories. For example, given the following sentence:
John Doe bought 100 shares of Apple in 2020.
A NER model is expected to identify the following entities and their corresponding category:
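As an illustration (using spaCy’s standard entity label scheme; this is my sketch of the expected output, not the article’s exact table), the mapping could be represented as:

```python
# Illustrative expected NER output for the example sentence,
# using spaCy's standard entity labels.
expected_entities = [
    ("John Doe", "PERSON"),   # people, including fictional
    ("100", "CARDINAL"),      # numerals not covered by other types
    ("Apple", "ORG"),         # companies, agencies, institutions
    ("2020", "DATE"),         # absolute or relative dates or periods
]

for text, label in expected_entities:
    print(f"{text!r} -> {label}")
```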
This tutorial focuses on training a custom NER component to identify drug…
Speech-to-Text functionality has been gaining momentum recently as it offers users a whole new experience. It is being widely adopted by companies in the market, especially in the customer service industry. In fact, big players such as Google and Microsoft provide their own Speech-to-Text APIs as part of their technologies.
For your information, most advanced Speech-to-Text APIs come with word-level timestamps.
For example, you will get the following output when running Google’s Speech-to-Text API:
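The actual API response is omitted here, but word-level timestamps generally take a shape like the following (a sketch with made-up values, not the exact response format):

```python
# Illustrative word-level timestamps: each recognized word with its
# start and end offsets in seconds (values are invented).
words = [
    {"word": "hello", "start_time": 0.0, "end_time": 0.4},
    {"word": "world", "start_time": 0.4, "end_time": 0.9},
]

# With such data you can, for example, compute the span of the utterance.
duration = words[-1]["end_time"] - words[0]["start_time"]
print(duration)
```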
Most of the time, a streaming response is the preferred choice when returning audio or video files from a server. This is mainly because streaming responses work really well for large files, especially those that exceed 1GB in size.
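The core idea behind a streaming response is to read and yield the file in fixed-size chunks instead of loading it into memory all at once. A minimal sketch of such a chunk generator (the chunk size is an arbitrary choice):

```python
def iter_file(path, chunk_size=64 * 1024):
    """Yield a file's contents in fixed-size binary chunks."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# A streaming-capable framework would send each chunk to the client
# as it is produced, keeping server memory usage flat regardless of
# the file's total size.
```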
In this tutorial, you will learn to:
Let’s proceed to the next section and start installing the necessary modules.
It is highly recommended to create a virtual…
By reading this article, you will learn to extend the documentation of FastAPI to include multiple examples for all the requests and responses. This works for both Swagger UI and ReDoc endpoints.
For example, you will be able to achieve the following result in ReDoc:
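For context, OpenAPI represents multiple examples as a mapping of example names to Example Objects with `summary`, `description`, and `value` fields. Here is that structure as plain Python, with hypothetical payloads of my own invention:

```python
# Hypothetical request-body examples, keyed by example name, following
# the OpenAPI Example Object fields (summary, description, value).
examples = {
    "valid": {
        "summary": "A valid item",
        "description": "All required fields supplied.",
        "value": {"name": "Foo", "price": 35.4},
    },
    "invalid": {
        "summary": "An invalid item",
        "description": "Price is not a number.",
        "value": {"name": "Bar", "price": "thirty-five"},
    },
}
print(sorted(examples))
```

Both Swagger UI and ReDoc render each named entry in this mapping as a selectable example.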
By reading this article, you will learn to train a sarcasm text classification model and deploy it in your Python application. Detecting the presence of sarcasm in text is a fun yet challenging natural language processing task.
This tutorial focuses mainly on training a custom multi-class text classification model with spaCy’s
TextCat component. If you are just starting out or have your own use case, all you need to do is swap out the dataset for the one you prefer. The setup and training process is more or less the same, with some minor changes to the configuration.
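For orientation, the relevant part of a spaCy training config looks roughly like this (a trimmed sketch; in practice you would generate a complete config with `python -m spacy init config`):

```ini
# Trimmed sketch of a spaCy training config for text classification.
[nlp]
lang = "en"
pipeline = ["textcat"]

[components.textcat]
factory = "textcat"
```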
To keep it simple…