
Posts

Showing posts with the label python

Connecting to Salesforce using Python [aiosfstream]

Connect to the Salesforce Streaming API from Python to consume Salesforce objects.

Library used: aiosfstream
Ref link: https://aiosfstream.readthedocs.io/en/latest/quickstart.html#connecting

Quick start:

Authentication: To connect to the Salesforce Streaming API, all clients must authenticate themselves. Several methods are supported:

Username - password authentication (using SalesforceStreamingClient):

    client = SalesforceStreamingClient(
        consumer_key="<consumer key>",
        consumer_secret="<consumer secret>",
        username="<username>",
        password="<password>",
    )
    # client = Client(auth)

Refresh token authentication:

    auth = RefreshTokenAuthenticator(
        consumer_key="<consumer key>",
        consumer_secret="<consumer secret>",
        refresh_token="<refresh_token>",
    )
    client = Client(auth)

Authentication on sand...
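Once authenticated, the client is used as an async context manager and messages are consumed by async iteration. A runnable sketch of that consumption pattern, using a stand-in client class (the FakeStreamingClient, the channel name, and the message shapes are all illustrative, not part of aiosfstream):

```python
import asyncio


class FakeStreamingClient:
    """Stand-in for a streaming client: an async context manager
    that yields messages via async iteration."""

    def __init__(self, messages):
        self._messages = messages

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False

    async def subscribe(self, channel):
        # Record the channel we subscribed to.
        self.channel = channel

    def __aiter__(self):
        self._it = iter(self._messages)
        return self

    async def __anext__(self):
        try:
            return next(self._it)
        except StopIteration:
            raise StopAsyncIteration


async def consume(client):
    """Subscribe to a channel and collect the data payloads."""
    received = []
    async with client:
        await client.subscribe("/topic/InvoiceStatementUpdates")
        async for message in client:
            received.append(message["data"])
    return received


msgs = [
    {"channel": "/topic/InvoiceStatementUpdates", "data": {"id": 1}},
    {"channel": "/topic/InvoiceStatementUpdates", "data": {"id": 2}},
]
result = asyncio.run(consume(FakeStreamingClient(msgs)))
print(result)  # [{'id': 1}, {'id': 2}]
```

With a real SalesforceStreamingClient the `consume` coroutine body stays the same; only the client construction (shown in the snippets above) changes.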

Getting started with apache-airflow (Part 1)

# Apache Airflow quick start
link: https://airflow.apache.org/docs/stable/start.html

    # export AIRFLOW_HOME
    vi ~/.bash_profile
    # setting AIRFLOW_HOME
    export AIRFLOW_HOME=/User/Desktop/airflow/
    cd $AIRFLOW_HOME

    # create a virtual environment
    python3 -m venv ./venv

    # show the list of installed dependencies
    pip3 list

    # install apache airflow
    pip3 install apache-airflow

    # initialize the airflow database
    $ airflow initdb

    # start the webserver on port 8080
    $ airflow webserver -p 8080

Now we should be able to see Airflow DAGs at the local URL: http://localhost:8080/admin/

    # start the scheduler
    $ airflow scheduler

    # review the airflow config file under the AIRFLOW_HOME dir,
    # or go to the UI and follow the Admin -> Configuration menu
    $ cat airflow.cfg

We can learn more about Airflow features from the configuration file, for example: it can store logs remotely in AWS S3, Google Cloud Storage or Elasticsearch ( remote_logs , j...
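The "DAG" in Airflow's DAGs is a directed acyclic graph of tasks: each task runs only after its upstream dependencies finish. A minimal sketch of that ordering idea in pure Python (the task names are illustrative, not from any real pipeline), using the stdlib graphlib module rather than Airflow itself:

```python
from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on (illustrative ETL-style names)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"load"},
}

# static_order() yields tasks so that every task appears
# after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'notify']
```

Airflow's scheduler does essentially this, plus retries, scheduling intervals, and parallel execution of tasks whose dependencies are all met.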

nba_analysis_with_pandas (Source: RealPython)

exploring_with_pandas_real_python_example

Downloading data from the web

Use the requests module:
- response = requests.get(url)
- response.raise_for_status() checks whether the request succeeded
- download it to a file by writing the response content to it:

    with open(target_csv_path, "wb") as f:
        f.write(response.content)

In [6]:

    import requests

    download_url = "https://raw.githubusercontent.com/fivethirtyeight/data/master/nba-elo/nbaallelo.csv"
    target_csv_path = "nba_all_elo.csv"

    response = requests.get(download_url)
    response.raise_for_status()  # check the request was successful
    with open(target_csv_path, "wb") as f:
        f.write(response.content)
    print("Download ready")

Download ready

Reading the CSV file using pandas

In [7]:

    import pandas ...
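After the download, pandas.read_csv turns the file into a DataFrame. A self-contained sketch of that read step, using a tiny in-memory CSV via io.StringIO instead of the real nba_all_elo.csv (the column names and rows here are illustrative stand-ins, not the actual dataset):

```python
import io

import pandas as pd

# A tiny stand-in for the downloaded CSV (illustrative columns/rows).
csv_text = """team_id,pts,opp_pts
HUS,66,68
NYK,68,66
"""

# pd.read_csv accepts a path like "nba_all_elo.csv" or any file-like object.
df = pd.read_csv(io.StringIO(csv_text))
print(len(df))           # 2
print(list(df.columns))  # ['team_id', 'pts', 'opp_pts']
```

With the real file you would simply call pd.read_csv("nba_all_elo.csv") and then start exploring with df.head(), df.shape, and friends.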