Python API
Azure
class RPA.Cloud.Azure.Azure(region: str = 'northeurope', robocloud_vault_name: str = None)

Bases: RPA.Cloud.Azure.ServiceTextAnalytics, RPA.Cloud.Azure.ServiceFace, RPA.Cloud.Azure.ServiceComputerVision, RPA.Cloud.Azure.ServiceSpeech
Azure is a library for operating with Microsoft Azure API endpoints.
List of supported service names:
computervision (Azure Computer Vision API)
face (Azure Face API)
speech (Azure Speech Services API)
textanalytics (Azure Text Analytics API)
Azure authentication
Authentication for Azure is set with a service subscription key, which can be given to the library in two different ways.

Method 1: as environment variables, either a service-specific variable, for example AZURE_TEXTANALYTICS_KEY, or the common AZURE_SUBSCRIPTION_KEY, which is used for all services.

Method 2: as a Robocorp Vault secret. The vault name needs to be given in the library init or with the keyword Set Robocloud Vault. Secret keys are expected to match the environment variable names.
Method 1. subscription key using environment variable
*** Settings ***
Library   RPA.Cloud.Azure

*** Tasks ***
Init Azure services
    # NO parameters for client, expecting to get subscription key
    # with AZURE_TEXTANALYTICS_KEY or AZURE_SUBSCRIPTION_KEY environment variable
    Init Text Analytics Service
Method 2. setting Robocloud Vault in the library init
*** Settings ***
Library   RPA.Cloud.Azure   robocloud_vault_name=azure

*** Tasks ***
Init Azure services
    Init Text Analytics Service   use_robocloud_vault=${TRUE}
Method 2. setting Robocloud Vault with keyword
*** Settings ***
Library   RPA.Cloud.Azure

*** Tasks ***
Init Azure services
    Set Robocloud Vault   vault_name=googlecloud
    Init Text Analytics Service   use_robocloud_vault=${TRUE}
References
List of supported language locales - Azure locale list
List of supported region identifiers - Azure region list
Examples
Robot Framework
This section describes how to use the library in your Robot Framework tasks.
*** Settings ***
Library   RPA.Cloud.Azure

*** Variables ***
${IMAGE_URL}   IMAGE_URL
${FEATURES}    Faces,ImageType

*** Tasks ***
Visioning image information
    Init Computer Vision Service
    &{result}    Vision Analyze    image_url=${IMAGE_URL}    visual_features=${FEATURES}
    @{faces}     Set Variable      ${result}[faces]
    FOR    ${face}    IN    @{faces}
        Log    Age: ${face}[age], Gender: ${face}[gender], Rectangle: ${face}[faceRectangle]
    END
Python
This section describes how to use the library in your own Python modules.
library = Azure()
library.init_text_analytics_service()
library.init_face_service()
library.init_computer_vision_service()
library.init_speech_service("westeurope")

response = library.sentiment_analyze(
    text="The rooms were wonderful and the staff was helpful."
)

response = library.detect_face(
    image_file=PATH_TO_FILE,
    face_attributes="age,gender,smile,hair,facialHair,emotion",
)
for item in response:
    gender = item["faceAttributes"]["gender"]
    age = item["faceAttributes"]["age"]
    print(f"Detected a face, gender:{gender}, age: {age}")

response = library.vision_analyze(
    image_url=URL_TO_IMAGE,
    visual_features="Faces,ImageType",
)
meta = response["metadata"]
print(f"Image dimensions {meta['width']}x{meta['height']} pixels")

for face in response["faces"]:
    left = face["faceRectangle"]["left"]
    top = face["faceRectangle"]["top"]
    width = face["faceRectangle"]["width"]
    height = face["faceRectangle"]["height"]
    print(f"Detected a face, gender:{face['gender']}, age: {face['age']}")
    print(f"  Face rectangle: (left={left}, top={top})")
    print(f"  Face rectangle: (width={width}, height={height})")

library.text_to_speech(
    text="Developer tools for open-source RPA leveraging the Robot Framework ecosystem",
    neural_voice_style="cheerful",
    target_file="output.mp3",
)
ROBOT_LIBRARY_DOC_FORMAT = 'REST'

ROBOT_LIBRARY_SCOPE = 'GLOBAL'
class RPA.Cloud.Azure.AzureBase

Bases: object

Base class for all Azure services.

TOKEN_LIFESPAN is given in seconds. A token is valid for 10 minutes, so the maximum lifetime is set to 9.5 minutes = 570.0 seconds.
COGNITIVE_API = 'api.cognitive.microsoft.com'

TOKEN_LIFESPAN = 570.0

logger = None

region = None

robocloud_vault_name: str = None

services: dict = {}

set_robocloud_vault(vault_name)
Set Robocorp Vault name

Parameters
    vault_name – Robocorp Vault name
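Below is a minimal Python sketch of using this keyword; the vault name "azure" and the service being initialized are illustrative assumptions.

from RPA.Cloud.Azure import Azure

library = Azure()
# Point the library at a Robocorp Vault entry (the vault name is an example)
library.set_robocloud_vault(vault_name="azure")
# Subsequent service inits can then read the subscription key from the vault
library.init_text_analytics_service(use_robocloud_vault=True)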
token = None

token_time = None
class RPA.Cloud.Azure.ServiceComputerVision

Bases: RPA.Cloud.Azure.AzureBase

Class for Azure Computer Vision service
init_computer_vision_service(region: str = None, use_robocloud_vault: bool = False) → None
Initialize Azure Computer Vision

Parameters
    region – identifier for service region
    use_robocloud_vault – use secret stored in Robocorp Vault
vision_analyze(image_file: str = None, image_url: str = None, visual_features: str = None, json_file: str = None) → dict
Identify features in the image

Parameters
    image_file – filepath of image file
    image_url – URI to image; if given, it is used instead of image_file
    visual_features – comma-separated list of features, for example "Categories,Description,Color"
    json_file – filepath to write results into

Returns
    analysis in JSON format

See the Computer Vision API for valid feature names and their explanations:
Adult
Brands
Categories
Color
Description
Faces
ImageType
Objects
Tags
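A minimal sketch of analyzing a local image for a subset of those features and writing the raw response to a file; the file paths are example values.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_computer_vision_service()
# Analyze the image and also persist the raw response as JSON
result = library.vision_analyze(
    image_file="image.png",
    visual_features="Categories,Description,Color",
    json_file="analysis.json",
)
print(result)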
vision_describe(image_file: str = None, image_url: str = None, json_file: str = None) → dict
Describe image with tags and captions

Parameters
    image_file – filepath of image file
    image_url – URI to image; if given, it is used instead of image_file
    json_file – filepath to write results into

Returns
    analysis in JSON format
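A minimal sketch of describing an image from a URL; the URL is an example, and the response keys follow the Computer Vision describe endpoint, so treat them as illustrative.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_computer_vision_service()
response = library.vision_describe(image_url="https://example.com/image.jpg")
# "description"/"captions" are shown for illustration; inspect the returned
# dict for the exact structure of your API version
for caption in response.get("description", {}).get("captions", []):
    print(caption.get("text"), caption.get("confidence"))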
vision_detect_objects(image_file: str = None, image_url: str = None, json_file: str = None) → dict
Detect objects in the image

Parameters
    image_file – filepath of image file
    image_url – URI to image; if given, it is used instead of image_file
    json_file – filepath to write results into

Returns
    analysis in JSON format
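A minimal sketch of detecting objects in a local image; the file path is an example and the response keys are illustrative.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_computer_vision_service()
response = library.vision_detect_objects(image_file="image.png")
# The "objects" list and its fields follow the Computer Vision object
# detection response; treat the key names as illustrative
for obj in response.get("objects", []):
    print(obj.get("object"), obj.get("confidence"), obj.get("rectangle"))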
vision_ocr(image_file: str = None, image_url: str = None, json_file: str = None) → dict
Optical Character Recognition (OCR) detects text in an image

Parameters
    image_file – filepath of image file
    image_url – URI to image; if given, it is used instead of image_file
    json_file – filepath to write results into

Returns
    analysis in JSON format
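A minimal sketch of running OCR on a local image; the file path is an example, and the regions/lines/words nesting follows the OCR endpoint response, so treat it as illustrative.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_computer_vision_service()
response = library.vision_ocr(image_file="receipt.png")
# Reassemble the detected text line by line
for region in response.get("regions", []):
    for line in region.get("lines", []):
        print(" ".join(word["text"] for word in line.get("words", [])))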
class RPA.Cloud.Azure.ServiceFace

Bases: RPA.Cloud.Azure.AzureBase

Class for Azure Face service
detect_face(image_file: str = None, image_url: str = None, face_attributes: str = None, face_landmarks: bool = False, recognition_model: str = 'recognition_02', json_file: str = None) → dict
Detect facial attributes in the image

Parameters
    image_file – filepath of image file
    image_url – URI to image; if given, it is used instead of image_file
    face_attributes – comma-separated list of attributes, for example "age,gender,smile"
    face_landmarks – whether to return face landmarks of the detected faces. The default value is False
    recognition_model – model used by Azure to detect faces, options are "recognition_01" or "recognition_02", default is "recognition_02"
    json_file – filepath to write results into

Returns
    analysis in JSON format

Read more about face_attributes at Face detection explained:
age
gender
smile
facialHair
headPose
glasses
emotion
hair
makeup
accessories
blur
exposure
noise
init_face_service(region: str = None, use_robocloud_vault: bool = False) → None
Initialize Azure Face

Parameters
    region – identifier for service region
    use_robocloud_vault – use secret stored in Robocorp Vault
class RPA.Cloud.Azure.ServiceSpeech

Bases: RPA.Cloud.Azure.AzureBase

Class for Azure Speech service
audio_formats = {'MP3': 'audio-24khz-96kbitrate-mono-mp3', 'WAV': 'riff-24khz-16bit-mono-pcm'}
init_speech_service(region: str = None, use_robocloud_vault: bool = False) → None
Initialize Azure Speech

Parameters
    region – identifier for service region
    use_robocloud_vault – use secret stored in Robocorp Vault
list_supported_voices(locale: str = None, neural_only: bool = False, json_file: str = None)
List supported voices for the Azure Speech Services API.

Parameters
    locale – list only voices specific to the locale; by default all voices are returned
    neural_only – True if only neural voices should be returned, False by default
    json_file – filepath to write results into

Returns
    voices in JSON

Available voice selection might differ between regions.
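A minimal sketch of listing neural voices for a single locale and saving the raw response; the region, locale, and file path are example values.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_speech_service(region="westeurope")
# List only neural voices for en-US and also write the response to a file
voices = library.list_supported_voices(
    locale="en-US",
    neural_only=True,
    json_file="voices.json",
)
print(voices)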
text_to_speech(text: str, language: str = 'en-US', name: str = 'en-US-AriaRUS', gender: str = 'FEMALE', encoding: str = 'MP3', neural_voice_style: Any = None, target_file: str = 'synthesized.mp3')
Synthesize speech synchronously

Parameters
    text – input text to synthesize
    language – voice language, defaults to "en-US"
    name – voice name, defaults to "en-US-AriaRUS"
    gender – voice gender, defaults to "FEMALE"
    encoding – result encoding type, defaults to "MP3"
    neural_voice_style – if given, a neural voice is used, example style: "cheerful"
    target_file – save synthesized output to file, defaults to "synthesized.mp3"

Returns
    synthesized output in bytes

Neural voices are only supported for Speech resources created in East US, South East Asia, and West Europe regions.
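A minimal sketch of synthesizing speech with the WAV encoding listed in audio_formats; the region, text, and output path are example values.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_speech_service(region="westeurope")
# Default (non-neural) voice, WAV output written to an example path
audio_bytes = library.text_to_speech(
    text="Hello from the Azure Speech service",
    encoding="WAV",
    target_file="synthesized.wav",
)
print(f"Received {len(audio_bytes)} bytes of audio")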
class RPA.Cloud.Azure.ServiceTextAnalytics

Bases: RPA.Cloud.Azure.AzureBase

Class for Azure Text Analytics service
detect_language(text: str, json_file: str = None) → dict
Detect languages in the given text

Parameters
    text – A UTF-8 text string
    json_file – filepath to write results into

Returns
    analysis in JSON format
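A minimal sketch of detecting the language of a short text; the example text is arbitrary and the response keys follow the Text Analytics languages endpoint, so treat them as illustrative.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_text_analytics_service()
response = library.detect_language(text="Tämä on suomenkielinen lause.")
# Inspect the returned dict for the exact structure of your API version
for doc in response.get("documents", []):
    print(doc.get("detectedLanguages"))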
find_entities(text: str, language: str = None, json_file=None) → dict
Detect entities in the given text

Parameters
    text – A UTF-8 text string
    language – input language, if known
    json_file – filepath to write results into

Returns
    analysis in JSON format
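A minimal sketch of entity detection; the example sentence is arbitrary and the key names follow the Text Analytics entities endpoint, shown for illustration only.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_text_analytics_service()
response = library.find_entities(
    text="Microsoft was founded by Bill Gates and Paul Allen.",
    language="en",
)
for doc in response.get("documents", []):
    for entity in doc.get("entities", []):
        print(entity.get("name"), entity.get("type"))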
init_text_analytics_service(region: str = None, use_robocloud_vault: bool = False)
Initialize Azure Text Analytics

Parameters
    region – identifier for service region
    use_robocloud_vault – use secret stored in Robocorp Vault
key_phrases(text: str, language: str = None, json_file: str = None) → dict
Detect key phrases in the given text

Parameters
    text – A UTF-8 text string
    language – input language, if known
    json_file – filepath to write results into

Returns
    analysis in JSON format
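A minimal sketch of extracting key phrases; the example text is arbitrary and the "keyPhrases" key follows the Text Analytics key phrases endpoint, shown for illustration only.

from RPA.Cloud.Azure import Azure

library = Azure()
library.init_text_analytics_service()
response = library.key_phrases(
    text="The rooms were wonderful and the staff was helpful.",
    language="en",
)
for doc in response.get("documents", []):
    print(doc.get("keyPhrases"))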
sentiment_analyze(text: str, language: str = None, json_file: str = None) → dict
Analyze sentiment in the given text

Parameters
    text – A UTF-8 text string
    language – input language, if known
    json_file – filepath to write results into

Returns
    analysis in JSON format