In this post, we will scrape PCSO lottery results and prepare the data for analysis in Excel, taking your lottery play to the next level.

Scraping the Data

Let us first install the modules we’re going to need for scraping the website (skip this if you already have them). Note that pandas also needs an Excel engine such as openpyxl to write the .xlsx workbook at the end.

python -m pip install selenium beautifulsoup4 pandas openpyxl

And then import them for our project:

# Selenium for simulating clicks in a browser
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec

# BeautifulSoup for scraping
from bs4 import BeautifulSoup

# pandas for processing the data
import pandas as pd

# datetime for getting today's date, plus timedelta for shifting draw times
from datetime import datetime, date, timedelta

Simulating Clicks in the Browser with Selenium

For simulating clicks in a web browser, we are going to use Selenium. We also need to download a web driver such as Microsoft Edge’s msedgedriver (the corresponding browser, Microsoft Edge, needs to be installed, of course). Since I’m on Windows, where MS Edge comes right out of the box, I will use it, but feel free to use your preferred browser that has a driver, such as Chrome or Firefox.

Now set the path to where the webdriver executable is, as well as the URL where the lottery data is located.

# Set the path to where the webdriver executable is
# For me it's the following:
path = ("C:\\Users\\<Username>\\AppData\\Local\\Programs\\Python" +
         "\\Python39\\Scripts\\msedgedriver.exe")

# Designate the url to be scraped to a variable
url = "https://www.pcso.gov.ph/SearchLottoResult.aspx"

Now initialize a Selenium session by directing it to the webdriver executable.

# Initialize the Edge webdriver
driver = webdriver.Edge(executable_path=path) 

# Grab the web page
driver.get(url)                              
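
A caveat: the executable_path argument is the Selenium 3 style. If you are on Selenium 4 or newer, it has been removed in favor of a Service object; a minimal sketch of the equivalent setup:

# Selenium 4+ equivalent (sketch): pass the driver path via a Service object
from selenium.webdriver.edge.service import Service

driver = webdriver.Edge(service=Service(path))
driver.get(url)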

One problem with the page, though: if you inspect the page’s source, there is a div with the class pre-con. If you have the driver proceed without waiting a few seconds, some of the buttons are unclickable because this div container blocks them, so we have to tell the WebDriver to wait until it disappears. I discovered this after a while of troubleshooting why Selenium could not give any input to the web form.

# Create a wait object with a 5-second timeout
wait = WebDriverWait(driver, 5)

# wait for the div class "pre-con" to be invisible to ensure clicks work
wait.until(ec.invisibility_of_element_located((By.CLASS_NAME, "pre-con")))

[OUT]: <selenium.webdriver.remote.webelement.WebElement (session="fb056d28fee8b152833d3ed6d8827c99", element="a7ca8ad3-7f46-47c8-8b8b-1b80414f1e51")>

Now that the wait is over, let us proceed to entering our parameters into the ASP.NET web form (the form with the filters you see when visiting the page) for the data we need. We do this with the .find_element_by_id method, because we know the elements’ ids from inspecting the page’s source where the dropdown menus are.

Tip: To inspect a dropdown menu, right-click on it and select “Inspect” (or press F12 to open the Developer Tools). We then get the value inside of its id attribute.

The PCSO search Lotto form

We want the end date to be today so we get the latest results from all the games, and the start date to be the earliest available option, which is January 1, 2012 in the dropdown menu (the code below computes the year as ten years before today, which holds at the time of writing). As for the lotto game, we want all games: we will split the data into smaller dataframes per game later with pandas, so that we only need to scrape the website when we want to update our data. But first, get today’s date, which will be used later.

# Get today's date with the datetime import
today = date.today()

# Store the current year, month, and day as strings
td_year = today.strftime("%Y")
td_month = today.strftime("%B")

# Strip the leading zero from the day so it matches the dropdown values
td_day = today.strftime("%d").lstrip("0")

# The start year is ten years back (2012 at the time of writing)
startyr = str(int(td_year) - 10)

print("Today is " + td_month + " " + td_day + ", " + td_year + ".\n")
[OUT]: Today is May 9, 2022.

Now let’s have Selenium find the form’s elements and select the options we want.

# Select Start Date as January 1 of the start year (2012 at the time of writing)
start_month = Select(driver.find_element_by_id(
    "cphContainer_cpContent_ddlStartMonth"))
start_month.select_by_value("January")

start_day = Select(driver.find_element_by_id(
    "cphContainer_cpContent_ddlStartDate"))
start_day.select_by_value("1")

start_year = Select(driver.find_element_by_id(
    "cphContainer_cpContent_ddlStartYear"))
start_year.select_by_value(startyr)

# Select End Date as Today
end_month = Select(driver.find_element_by_id(
    "cphContainer_cpContent_ddlEndMonth"))
end_month.select_by_value(td_month)

end_day = Select(driver.find_element_by_id("cphContainer_cpContent_ddlEndDay"))
end_day.select_by_value(td_day)

end_year = Select(driver.find_element_by_id(
    "cphContainer_cpContent_ddlEndYear"))
end_year.select_by_value(td_year)

# Lotto Game
game = Select(driver.find_element_by_id(
    "cphContainer_cpContent_ddlSelectGame"))

# If you inspect the page, the value of the option for "All Games" is 0
game.select_by_value('0')

# Submit the parameters by clicking the search button
search_button = driver.find_element_by_id("cphContainer_cpContent_btnSearch")
search_button.click()
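
Note that the find_element_by_* helpers were removed in Selenium 4.3. If the calls above raise an AttributeError on your setup, the modern locator style is equivalent; a sketch for one of the dropdowns (the rest follow the same pattern):

# Selenium 4.3+ locator style (same element, newer API)
start_month = Select(driver.find_element(
    By.ID, "cphContainer_cpContent_ddlStartMonth"))
start_month.select_by_value("January")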

Scraping the data using BeautifulSoup

Now it’s time to scrape the data we need from the current page with BeautifulSoup.

Firstly, feed the page’s source code into Beautiful Soup, then have it find the results table by id (inspect the source again to find it) and get all of the table’s rows by their attributes, such as class.

# Feed the page's source code into Beautiful Soup
doc = BeautifulSoup(driver.page_source, "html.parser")

# Find the results table by id and grab its rows by class
rows = doc.find('table', id='cphContainer_cpContent_GridView1').find_all(
    'tr', attrs={'class': "alt"})
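
If the results had not rendered yet, the chained find_all above would raise an AttributeError, because find returns None. When debugging, it can help to check the pieces separately; a sketch:

# Optional sanity check: confirm the results table actually came back
table = doc.find('table', id='cphContainer_cpContent_GridView1')
assert table is not None, "Results table not found; did the search run?"
print("Scraped", len(rows), "result rows")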

Now it’s time to put the data into a Python list of dictionaries.

# Initialize a list to hold our data
entries = []

# Now loop through the rows and put the data into the list to make a table
for row in rows:
    cells = row.select("td")
    entry = {
        "Game": cells[0].text,
        "Combination": cells[1].text,
        "Date": cells[2].text,
        "Prize": cells[3].text,
        "Winners": cells[4].text,
    }
    entries.append(entry)
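
One optional variation, as a drop-in replacement for the loop above: the cells come back padded with spaces, so stripping the whitespace at scrape time simplifies the cleanup step later (the no-combination filter can then compare against a bare "-"). A sketch:

# Variation (sketch): strip the padding from each cell while scraping
for row in rows:
    cells = row.select("td")
    entry = {
        "Game": cells[0].get_text(strip=True),
        "Combination": cells[1].get_text(strip=True),
        "Date": cells[2].get_text(strip=True),
        "Prize": cells[3].get_text(strip=True),
        "Winners": cells[4].get_text(strip=True),
    }
    entries.append(entry)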

Processing the Data

Cleaning Up the Data with pandas

Now that we have the data in a list, it’s time to put it into a pandas dataframe and clean it up. If you examine the data closely, there are duplicates, so we have to remove those. We also need to convert the columns to proper data types to make them easier to process down the line (i.e., sanitization).

# Turn the list into a DataFrame
df = pd.DataFrame(entries)

# Remove duplicate rows (keep=False drops every copy of a duplicated row)
df.drop_duplicates(inplace=True, keep=False)

# Remove rows that have no combination associated
df = df[df["Combination"] != "-                "]

The last line above looks for and removes entries that have no combination; the site renders these as a dash padded with trailing whitespace. I found this only after hours of figuring out why I could not perform certain operations on the data, like converting columns to their proper data types.
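
If you would rather not depend on the exact number of spaces the site emits, a slightly more defensive sketch of the same filter strips the padding before comparing:

# Sketch: strip the padding first so the comparison is just against "-"
df = df[df["Combination"].str.strip() != "-"]

Speaking of data types, let’s convert them now.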

# Convert the dates to datetime type
df["Date"] = df["Date"].astype('datetime64[ns]')

# Remove the commas in the prize amounts
df["Prize"] = df["Prize"].replace(',','', regex=True)
    
# Convert data types of Prize and Winners to float and integers
df["Prize"] = df["Prize"].astype(float)     # float because there are still centavos
df["Winners"] = df["Winners"].astype(int)

Now let’s look at our data so far:

df

                     Game        Combination       Date       Prize  Winners
0         Superlotto 6/49  18-24-04-26-47-36 2022-05-08  67522822.8        0
1      Suertres Lotto 4PM              1-3-9 2022-05-08      4500.0      279
2          EZ2 Lotto 11AM              08-04 2022-05-08      4000.0      251
3           EZ2 Lotto 9PM              07-04 2022-05-08      4000.0      784
4              Lotto 6/42  14-24-08-16-22-37 2022-05-07   6051682.0        0
...                   ...                ...        ...         ...      ...
15880           4D Vismin            0-6-3-3 2012-01-02     40672.0        7
15881  Suertres Lotto 4PM              3-9-3 2012-01-02      4500.0      256
15882       EZ2 Lotto 9PM              03-13 2012-01-02      4000.0      540
15883      EZ2 Lotto 11AM              31-24 2012-01-02      4000.0       68
15884    Grand Lotto 6/55  44-14-51-52-39-08 2012-01-02  71768080.8        0

15878 rows × 5 columns

Saving the data to an MS Excel workbook

So far it’s looking good. Next, it’s time to split this huge dataframe of ours into smaller dataframes by the type of lotto game. While we’re at it, let’s also fix the times for the Suertres Lotto and EZ2 Lotto games so that each draw time is encoded in the data rather than treated as a separate category.

After doing that, let’s save that into an Excel workbook so that we do not have to scrape every time we want to analyze the data.

# Sort the DataFrame by game
df.sort_values(by=["Game"], inplace=True)

# Get a list of the games (handy for checking the exact labels used)
games = df["Game"].unique().tolist()

# Now we can create DataFrames for each game
lotto_658 = df.loc[df["Game"]=="Ultra Lotto 6/58"].copy()
lotto_655 = df.loc[df["Game"]=="Grand Lotto 6/55"].copy()

# Sidenote: Super Lotto 6/49 and Mega Lotto 6/45 have different values from
# what was on the dropdown menu
lotto_649 = df.loc[df["Game"]=="Superlotto 6/49"].copy()
lotto_645 = df.loc[df["Game"]=="Megalotto 6/45"].copy()

# Anyways, continuing...
lotto_642 = df.loc[df["Game"]=="Lotto 6/42"].copy()
lotto_6d = df.loc[df["Game"]=="6Digit"].copy()
lotto_4d = df.loc[df["Game"]=="4Digit"].copy()

lotto_3da = df.loc[df["Game"]=="Suertres Lotto 11AM"].copy()
lotto_3db = df.loc[df["Game"]=="Suertres Lotto 4PM"].copy()
lotto_3dc = df.loc[df["Game"]=="Suertres Lotto 9PM"].copy()

lotto_2da = df.loc[df["Game"]=="EZ2 Lotto 11AM"].copy()
lotto_2db = df.loc[df["Game"]=="EZ2 Lotto 4PM"].copy()
lotto_2dc = df.loc[df["Game"]=="EZ2 Lotto 9PM"].copy()
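
As an aside, if the repetition bothers you, a more compact sketch builds one dictionary of dataframes keyed by game label; the explicit variables above just make the rest of the walkthrough easier to read.

# Alternative sketch: one DataFrame per game, keyed by its label
frames = {game: df.loc[df["Game"] == game].copy() for game in games}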

Let’s look at one of the dataframes:

lotto_2da.tail()

                 Game Combination       Date   Prize  Winners
11555  EZ2 Lotto 11AM       29-28 2014-09-24  4000.0       54
15533  EZ2 Lotto 11AM       27-16 2012-03-12  4000.0       55
15563  EZ2 Lotto 11AM       03-08 2012-03-06  4000.0      206
9533   EZ2 Lotto 11AM       24-30 2016-01-06  4000.0      121
2394   EZ2 Lotto 11AM       27-08 2020-11-07  4000.0      127

The Suertres Lotto and EZ2 Lotto games are split into 11:00 AM, 4:00 PM, and 9:00 PM draws. Let’s fix that by assigning the proper draw times to the Date column and combining them into bigger dataframes.

# Add 11 hours to the datetime to encode the Suertres Lotto 11AM draw time
lotto_3da["Date"] = lotto_3da["Date"] + timedelta(hours=11)

# Add 16 hours for the 4PM draw
lotto_3db["Date"] = lotto_3db["Date"] + timedelta(hours=16)

# Add 21 hours for the 9PM draw
lotto_3dc["Date"] = lotto_3dc["Date"] + timedelta(hours=21)

# Rename all the game entries as just Suertres Lotto
lotto_3da["Game"] = "Suertres Lotto"
lotto_3db["Game"] = "Suertres Lotto"
lotto_3dc["Game"] = "Suertres Lotto"

# Combine the three Suertres Lotto DataFrames into one
# (DataFrame.append was removed in pandas 2.0, so use pd.concat)
lotto_3d = pd.concat([lotto_3da, lotto_3db, lotto_3dc])
# Do the same for EZ2 Lotto
lotto_2da["Date"] = lotto_2da["Date"] + timedelta(hours=11)
lotto_2db["Date"] = lotto_2db["Date"] + timedelta(hours=16)
lotto_2dc["Date"] = lotto_2dc["Date"] + timedelta(hours=21)

# Rename all the game entries as just EZ2 Lotto
lotto_2da["Game"] = "EZ2 Lotto"
lotto_2db["Game"] = "EZ2 Lotto"
lotto_2dc["Game"] = "EZ2 Lotto"

# Combine the three EZ2 Lotto DataFrames into one
lotto_2d = pd.concat([lotto_2da, lotto_2db, lotto_2dc])
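
For what it’s worth, the same time-shifting can be done in one vectorized pass over the full dataframe before splitting, assuming the game labels end with their draw time and carry no trailing whitespace (strip them first otherwise); you would still rename the game labels afterwards. A sketch:

# Alternative sketch: derive the draw hour from the game label's suffix
suffix = df["Game"].str.extract(r"(11AM|4PM|9PM)$")[0]
hours = suffix.map({"11AM": 11, "4PM": 16, "9PM": 21}).fillna(0)
df["Date"] = df["Date"] + pd.to_timedelta(hours, unit="h")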

Now let’s look at one of them again to see if we were successful:

lotto_2da

            Game Combination                Date   Prize  Winners
2877   EZ2 Lotto       25-06 2020-02-21 11:00:00  4000.0       54
3858   EZ2 Lotto       05-06 2019-07-18 11:00:00  4000.0      164
7987   EZ2 Lotto       30-25 2016-12-24 11:00:00  4000.0      223
3876   EZ2 Lotto       29-29 2019-07-14 11:00:00  4000.0      231
14423  EZ2 Lotto       19-25 2012-11-12 11:00:00  4000.0      135
...          ...         ...                 ...     ...      ...
11555  EZ2 Lotto       29-28 2014-09-24 11:00:00  4000.0       54
15533  EZ2 Lotto       27-16 2012-03-12 11:00:00  4000.0       55
15563  EZ2 Lotto       03-08 2012-03-06 11:00:00  4000.0      206
9533   EZ2 Lotto       24-30 2016-01-06 11:00:00  4000.0      121
2394   EZ2 Lotto       27-08 2020-11-07 11:00:00  4000.0      127

1815 rows × 5 columns

Let’s also look at one of the combined dataframes:

lotto_2d

            Game Combination                Date   Prize  Winners
2877   EZ2 Lotto       25-06 2020-02-21 11:00:00  4000.0       54
3858   EZ2 Lotto       05-06 2019-07-18 11:00:00  4000.0      164
7987   EZ2 Lotto       30-25 2016-12-24 11:00:00  4000.0      223
3876   EZ2 Lotto       29-29 2019-07-14 11:00:00  4000.0      231
14423  EZ2 Lotto       19-25 2012-11-12 11:00:00  4000.0      135
...          ...         ...                 ...     ...      ...
4686   EZ2 Lotto       16-27 2019-01-11 21:00:00  4000.0      402
4681   EZ2 Lotto       15-08 2019-01-12 21:00:00  4000.0      249
12361  EZ2 Lotto       16-29 2014-03-16 21:00:00  4000.0      530
4695   EZ2 Lotto       06-20 2019-01-09 21:00:00  4000.0      335
4290   EZ2 Lotto       01-20 2019-04-09 21:00:00  4000.0      339

5334 rows × 5 columns

So far so good. The final stretch is to save our data to Microsoft Excel, which pandas makes easy with pandas.DataFrame.to_excel.

# Create an Excel writer object (uses the openpyxl engine for .xlsx files)
writer = pd.ExcelWriter("lotto.xlsx")

# Write the DataFrames to separate worksheets
df.to_excel(writer, sheet_name="All Data")
lotto_658.to_excel(writer, sheet_name="Ultra Lotto 6-58")
lotto_655.to_excel(writer, sheet_name="Grand Lotto 6-55")
lotto_649.to_excel(writer, sheet_name="Super Lotto 6-49")
lotto_645.to_excel(writer, sheet_name="Mega Lotto 6-45")
lotto_642.to_excel(writer, sheet_name="Lotto 6-42")

lotto_6d.to_excel(writer, sheet_name="6 Digit")
lotto_4d.to_excel(writer, sheet_name="4 Digit")
lotto_3d.to_excel(writer, sheet_name="Suertres Lotto")
lotto_2d.to_excel(writer, sheet_name="EZ2 Lotto")

# Save the Excel workbook (writer.save() was removed in pandas 2.0)
writer.close()
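
The same thing reads a bit cleaner with a context manager, which saves and closes the workbook automatically on exit; a sketch with just two of the sheets:

# Equivalent sketch: the with-block saves the workbook when it ends
with pd.ExcelWriter("lotto.xlsx") as writer:
    df.to_excel(writer, sheet_name="All Data")
    lotto_2d.to_excel(writer, sheet_name="EZ2 Lotto")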

Conclusion

With that, we have saved an Excel file named lotto.xlsx containing all of our scraped data, one worksheet per game, ready for analysis.
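
Since the whole point of saving the workbook was to avoid re-scraping, note that you can reload any sheet later with pandas.read_excel; a minimal sketch (index_col=0 restores the saved index):

# Reload a saved sheet later without re-scraping the site
lotto_2d = pd.read_excel("lotto.xlsx", sheet_name="EZ2 Lotto", index_col=0)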