Cards as Nested Resources in the Trello API

If you’re here for Trello API basics…

I’m going to skip over a lot of the introductory stuff about the Trello API – if you want to read more about the basics, you’ll want to read their Trello API Introduction.

What am I working on?

I’m digging into a project where I want to pull Trello data into a Redshift warehouse and then query it using Sisense for Cloud Data Teams (formerly Periscope). My primary objective is retrieving card data, although the results of this first task will undoubtedly uncover additional similar tasks.

When you read Trello’s API docs, they make it abundantly clear up front that *everything is nested*.


Since everything is nested, and there are so many *relationships* between objects, there are many *methods* to retrieve data. Put another way, there are different *lenses* you can use to view objects. And the request URLs you use are the key to retrieving data in these different ways.

Case in point – retrieving card data.



So, it’s like there’s a direct URL (URL params) pointing to a card or cards, or even to fields in those cards:

“Show me this board, and the cards in this board, and these fields in these cards.”

But then there’s a different kind of URL (query params) that allows you to ask a question:

“Show me this board, and cards in this board where conditions x, y, and z are true.”

The only way I’ve been able to wrap my head around this is to make requests both ways and see what kind of response is returned.
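To make the contrast concrete, here’s a sketch that builds both request styles for a hypothetical board ID (the base URL is Trello’s documented API host; the board ID, and the omission of key/token auth parameters, are simplifications for illustration):

```javascript
// Sketch: the two request styles against a hypothetical board ID
var base = 'https://api.trello.com/1/boards/';
var boardId = 'abc123'; // hypothetical

// Query params: ask the board a question about its cards
var queryStyle = base + boardId + '/?' + new URLSearchParams({
  cards: 'all',
  card_fields: 'name,idShort,dateLastActivity',
  card_modifiedSince: '2020-03-22T20:03:04.339Z'
});

// URL params: point at the cards resource directly
var urlStyle = base + boardId + '/cards?' + new URLSearchParams({
  filter: 'all',
  fields: 'name,idShort,dateLastActivity',
  since: '2020-03-22T20:03:04.339Z'
});
```

Note how the nesting shows up in the path itself: the first style requests the board (and gets board data back), while the second requests the board’s `/cards` collection directly.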

URLs and their responses

Query params

Let’s start with a URL using query params:

{{board}}/?cards=all&card_fields=name,idShort,dateLastActivity&card_modifiedSince=2020-03-22T20:03:04.339Z

In this query, I’m asking for specific fields from all cards from a specific board that were modified since a certain datetime.

The response gives me several things:

  1. Information about the board. I didn’t ask for this, but apparently Trello is going to give it to me anyway. The first 98 lines of my prettified JSON response tell me the board ID, name, URL, short URL, preferences, styles, labels, etc.
  2. Information about the cards. I did ask for this, and Trello gives me exactly what I asked for.

cards 1

Okay, great. That’s one way to get the information I need – use query params to ask a question and get an answer.

URL params

Now, let’s look at what they call URL params – the direct approach, rather than the question approach:

{{board}}/cards?filter=all&fields=name,idShort,dateLastActivity&since=2020-03-22T20:03:04.339Z

Rather than viewing this as a question, I’m kind of pointing to the cards directly. I want specific fields from all cards from a specific board where the card was created after a certain datetime.

Note that last bit – when you use URL params, card_modifiedSince is *not* one of the parameters you can use. You have to use before or since, which only references the creation date of the card. Again, different lenses looking at the same data.

The response gives me those specific card fields:

cards 2

Long story made short…

It’s up to you which method you use to request data from Trello’s API. You have to be careful to know which *lens* you’re trying to look through. For the purposes of my project, I’m going to stick with the query params. I only have one board and I want to know when a card was last updated in order to determine whether or not to get its data.

Have you used Trello’s API? Can you better summarize the difference between query params and URL params? What has made the most sense for you? I’d love to hear your feedback.

Web Scraping with Tampermonkey

I ran into a scenario today where a client needed data scraped from a website, but as far as I can tell the site offers no API or other endpoint I could hit with cURL or fetch. The data can be presented in a datatable, but this would require some scripting to run through multiple pages of results, each row being dumped into an output file. Alternatively, the website offers a CSV export button. I like the idea of having pre-formatted, clean data exported from the site on a schedule – the other half of the battle is figuring out how to move that data into a warehouse. But in this blog post I’ll cover the scraping portion, and perhaps I’ll write again when I figure out the rest.

As I was examining a variety of scraping options I ran into this post over at Stack Overflow:

The author pivoted from their initial attempts to what they considered to be a more elegant solution – using a Chrome extension called Tampermonkey to run a script on a particular web page, and automating this process with a .bat file. I figured I’d give it a try.

Installing Tampermonkey was really easy. You visit their extension page in the Chrome Web Store, click the Add To Chrome button, and give the extension some permissions – poof, you’re in business. Once the extension has been added, you can hover over it and click on the “Create a new script…” option to get started. The interface is simple – they provide a template containing some comments/options you can configure and then a Javascript editor where you can drop in your code.


You can even run their built-in syntax checker to ensure that you don’t have any particularly stupid errors. My one big complaint about Tampermonkey is that there isn’t an error log built into it, but it’s not a big deal to just run Chrome’s developer tools while you’re building and testing to track any potential bugs.

In a nutshell, when you set a URL value in the @match field, Tampermonkey will execute your script every time Chrome opens that address. This works great for me – the report I want to scrape is always at a particular URL. I also needed to set some start and end dates before clicking a button, and of course Javascript provides some simple DOM methods (e.g. document.getElementById(), document.querySelector(), etc.) to manipulate input fields. Seven lines of code and I’m scraping the report every time my browser opens the URL.
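For reference, a minimal userscript skeleton along these lines might look like this – the @match URL and element IDs are hypothetical, so swap in the real page’s values:

```javascript
// ==UserScript==
// @name     Scrape daily report
// @match    https://example.com/reports/daily
// @grant    none
// ==/UserScript==

// Fill the date inputs and click the export button.
// The element IDs here are hypothetical – match them to the real page.
function exportReport(doc) {
  doc.getElementById('startDate').value = '2020-01-01';
  doc.getElementById('endDate').value = '2020-01-31';
  doc.getElementById('exportCsv').click();
}

// Tampermonkey runs this on every page load that matches @match
if (typeof document !== 'undefined') {
  exportReport(document);
}
```

Wrapping the DOM work in a function keeps the script easy to test outside the browser, and the guard at the bottom just means nothing happens if there’s no page to act on.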

To fully automate this process, I set up a .bat file that will run once a day:

start chrome ""
timeout 10
move "C:\path\to\move\from.csv" "C:\path\to\move\to.csv"
taskkill /F /IM chrome.exe /T > nul

Piece of cake! The .bat file opens Chrome, directs it to the page where Tampermonkey will run, and then moves the downloaded file from my Downloads folder to a predetermined location. Then Chrome’s task gets killed, eliminating any warning messages.

Using Excel to List and Count Files

On one of my recent web scraping projects, I got a follow-up request from a client to add some additional data to my results. The scraped data was dumped into individual folders in PDF format, often with multiple documents per folder. The client wanted to know if I could provide them with a count of the number of PDF files in each folder – and there were thousands of folders. It wasn’t going to happen manually. And I hate VBA, so I was hoping to find a quick and dirty formula-based solution.

A quick Duck Duck Go search gave me the answer I needed – using the now-deprecated Excel “Files” formula, I was able to pull it off.

Here’s a link to a really, really helpful rundown on setting up the Files formula in the latest versions of Excel.

I had to make a few modifications to suit my unique case.

The =COUNTA(listFiles) formula worked, but only for one directory. I needed to go through thousands of directories. So I modified the listFiles formula range to include more than one row. When defining the formula name, I entered the range as:


Here’s what that looks like in Excel:


My directory names were based on a value in each row, so it was easy to set up a formula that would build the directory name and swap in the correct value in each row. For example:

="C:\Users\doug\Documents\Main Folder\"&A2&"\*"

Where cell A2 contains the folder name. Drag that down and you’ve got a column of directories.

One final modification – in this case, I had a mixture of PDF files and other file types in each directory, but I only wanted to count PDF files. By modifying the directory name, my formulas would only count files ending in PDF:

="C:\Users\doug\Documents\Main Folder\"&A2&"\*pdf"

The * is a wildcard, so any characters can appear in the file name before the PDF extension.

Then all you have to do is put the =COUNTA(listFiles) formula in another column and boom, you’ve got a count of the number of PDF files in each directory.

The only drawback to this seems to be that when there are *no* PDF files in a directory, a value of “1” is still returned in the count formula. I’m not sure why that is, and frankly it didn’t matter a whole lot to us so I didn’t spend any time resolving that little oddity. But it’s good to know.

Building and POSTing a JSON Payload with JavaScript

I’ve been automating some QuickBooks data entry for a client, and in doing so I needed to figure out how to build and POST a JSON payload with vanilla JavaScript. Seems like the kind of thing other people might be interested in – I’m not going to share my full code base because I’m still refining things, but here’s what you need to know to perform some basic functions.

Building a JSON Payload

Before you can POST anything via the API, you’ve got to structure your data. JSON (JavaScript Object Notation) is a really easy way to do this. For some systems, the alternative is XML.

If I’ve got a spreadsheet and I want to turn each row into JSON which can be POSTed, here’s some code that will get the job done:

// loop through each row and build the data to be posted
rowData.forEach(function(row) {
  var data = {
    "TotalAmt": row[4],
    "CustomerRef": {
      "value": row[2]
    }
  };

  // define the payload and stringify it
  var payload = JSON.stringify(data);
});

So, of course, this is just one chunk of code in a much larger set of scripts. But the goal here is: for each row, build a variable (data) that includes some values – and all of the punctuation and symbols have to be in there as well to properly build your JSON. The last step is to assign a new variable (payload) and perform the good ol’ JSON.stringify on your data variable, which now contains all of the values and correct syntax necessary to be turned into JSON.
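Pulled out of the larger script and made self-contained, the per-row loop looks something like this – the row layout matches the snippet above, but the sample rows and their values are made up for illustration:

```javascript
// Hypothetical rows: index 2 holds the customer ID, index 4 the total
var rowData = [
  ['INV-1', '2020-04-01', '58', 'Widgets', 150.0],
  ['INV-2', '2020-04-02', '61', 'Gadgets', 75.5]
];

// Build one JSON payload string per row
var payloads = [];
rowData.forEach(function(row) {
  var data = {
    "TotalAmt": row[4],
    "CustomerRef": {
      "value": row[2]
    }
  };
  payloads.push(JSON.stringify(data));
});
```

Collecting the strings in an array makes it easy to POST them one at a time, or to log them first and eyeball the JSON before sending anything.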


var response = UrlFetchApp.fetch(url, {
  headers: {
    Accept: 'application/json',
    Authorization: 'Bearer ' + service.getAccessToken()
  },
  method: 'post',
  payload: payload,
  contentType: 'application/json'
});
So remember, you’ve got your JSON payload assigned as variable “payload”. Now we need to plug that payload into an API POST call. To do that, we assign a new variable (response) and we use UrlFetchApp.fetch (yep, we’re using Google Apps Script) to send a bunch of information to a specific URL. I didn’t include the variable that contains the URL, so just pretend it’s a good URL 🙂

The data you send to the URL contains headers, including information about what’s being sent (application/json), and in this case we also send our access token (generated using some additional scripts). Our method is “post” – if you’re familiar with CRUD, this is the RESTful version of “create”. We define our payload as variable “payload”, and we again state that we’re sending JSON data. And that’s it! You know, along with a bunch of other stuff like try/catch for errors and some if/else stuff to make sure we’re putting our JSON in the right QuickBooks company.
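Outside of Apps Script, the same request shape carries over to the standard fetch API. Here’s a sketch of just the options object – the token and the commented-out endpoint URL are placeholders, not real QuickBooks values:

```javascript
// The payload string, built the same way as above
var payload = JSON.stringify({ TotalAmt: 150.0, CustomerRef: { value: '58' } });

// The same POST, expressed as standard fetch options
var options = {
  method: 'POST',
  headers: {
    'Accept': 'application/json',
    'Authorization': 'Bearer ' + 'YOUR_ACCESS_TOKEN', // placeholder token
    'Content-Type': 'application/json'
  },
  body: payload
};

// fetch('https://example.com/v3/company/123/invoice', options); // hypothetical endpoint
```

The mapping is nearly one-to-one: Apps Script’s `payload` and `contentType` fields become fetch’s `body` and the `Content-Type` header.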

Turning Trello into a Fully Functional Ticket System

If you haven’t heard of Trello by now…have you been living under a rock?


Seriously, though – Trello is a powerful and user-friendly board-style app. People use it for all sorts of things, but for me it has always been a handy organization system for my work. Trello boards consist of lists and cards; cards can be moved from list to list and can contain descriptions, checklists, attachments, and comments. You can even assign members of your board to a card.


I set up my boards with Backlog, Do, Doing, and Done lists. Cards start on the left and move, column by column, until they’re done.

trello board

For a long time I was the only member of my boards, but now that I’ve been able to bring on some freelance help I’ve also been able to add them into my boards, assign cards (tickets) to them, and communicate with them on a card-by-card basis. It’s really nice!

One thing that has always held me back, however, is that Trello has not been my one-stop-shop for tracking work. Trello doesn’t natively come with time tracking in cards, and it doesn’t natively come with a way to email status updates to clients who aren’t members of the board. But I recently found workarounds for both limitations, and they have turned my Trello boards into powerful tools to track my work *and* communicate with clients.

Tracking Time in Trello

Time tracking apps are a dime-a-dozen, but I’d like to focus on a service called Toggl. Toggl integrates with 100+ services, including Trello, and with their Chrome-powered Toggl Button Extension you put a time tracker in every Trello card. Heck, you can put a time tracker on every checklist item. Want to know how much time you spent on every aspect of your gig? Toggl can do it.

toggl in trello

Click here to read more about the Toggl Button Extension. I would note that while the extension is available in several browsers, I’ve found that it only embeds a tracker in my Trello cards and checklists on Chrome. Maybe that will change down the road.

Sending Notifications for Trello-powered Tickets

Zapier is an integration app that uses webhooks to connect various cloud systems, which is awesome. An action in App A magically triggers an action in App B. And by combining the power of Zapier with the content in your Trello boards and cards, you can easily automate notifications to your clients when cards get moved into your Done list, for example.

trello zapier

In Zapier, you’ll need to connect to Trello and your email provider of choice – I use Gmail.

You’ll then need to set up your trigger. Zapier will walk you through the set-up process. By the end, you’ll be selecting your Trello Board, List, and determining which card(s) this trigger will apply to. I set this to all cards.

zapier 1

Next up, you’ll set up your Gmail action. When a card is added to my Done list, I want to send an email to my client to let them know that the work has been completed. I’d also like to remind them what the work was to begin with, so there are a couple of fields we need to import from the Trello card.

For starters, I set the Gmail “To” field to match the Trello “Card First Comment Data Text” field. To make this work, right before I move the card to the Done list, I enter the client’s email address as a comment in the card.

Gmail 1

Then I manually fill in the From, From Name, and Reply To fields with my own information.

I set my email Subject to match the Trello “Card Title” field.

Gmail 2

Then in my email Body I add a nice message to the client letting them know that the work is done. You can see in the image below that these fields can be sent in-line with plain text, which is an awesome way to create automated but conversational messages for your clients. I even include the Trello card’s “Description” field in the email to remind my clients what I was working on.

Gmail 3

You can even add attachments to the email based on your Trello card’s comments – that could be the description, or data from checklists and attachments, etc. Whatever Trello offers, you can plug it into your email message.

And voila! Finish the set-up process by testing your Zap (that’s what Zapier calls these integrations) and your customers will get a pleasant notification every time you complete work on a ticket.

Sending Values from an HTML Form to a JavaScript File

I was doing some UX/UI tweaking on a Google Sheet today and did some experimentation with custom menus vs. custom sidebars.

Menus are nice when you just need to execute script files. But when you need to pass values into a script, your options are:

A) making the user enter data into cells, or

B) setting up a custom HTML sidebar

As a developer, it would probably be easiest to create your “form” in the sheet. This, however, would leave your sheet vulnerable to accidental changes. User deletes a row here or a column there and all of a sudden your formulas and scripts break. So from a UX/UI standpoint, I think it’s easiest for the end user to enter values into an HTML form they can’t change.

This raises a new challenge – how do you get data from the custom front-end HTML sidebar into your server-side JavaScript file?

In Your HTML File

<div id="stepThree" style="background-color:white;padding:5px;margin:5px;"><strong>Step 3:</strong> <input type="button" value="Call the API" onclick="callApiButton()" /></div>

The code above is just a div block with some text and a button. Note that the onClick action makes a call to the callApiButton() script. This callApiButton() script lives in the HTML file as well, right above the closing tag.

function callApiButton() {
  // run the server-side callAPI function, passing the form element's contents
  google.script.run.callAPI(document.getElementById('companyForm'));
  var stepThree = document.getElementById('stepThree');
  stepThree.style.backgroundColor = 'lightgreen';
}

This script does several things; first, it tells us that callApiButton() executes a script called callAPI and passes through the contents of the HTML element with an ID of companyForm. It also contains code that will turn the div containing this button green when it has been pressed – just a little visual confirmation for the user that they’re on the right track.

In Your JavaScript File

function getCompany(form) {
  var companyName = form.companyNames;
  // ...companyName is then used in the API call
}

This code is the bridge between your HTML and JS. In the HTML file we grabbed the value that the user gave us in the companyForm, and we passed it into the callAPI() script. The callAPI() script passes that same value into the getCompany() script and assigns that value to a variable – note that var companyName = form.companyNames, not form.companyForm. Back in our HTML file, we give our <select> a name of “companyNames” and in our JS file we’ve got to look at the <select> element to get our value.

When all of this is put together, the end user sees a form in the sidebar where they can select a company name from a drop-down list. Their selection is fed into a form that is submitted using a button, and then passed into our server-side API call.
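To see the hand-off in miniature, here’s a runnable sketch that fakes the serialized form object the sidebar hands to the server – when you pass a form element through google.script.run, the server-side function receives a plain object keyed by input names (the company name below is made up):

```javascript
// Server-side function: reads the field named "companyNames" off the form object
function getCompany(form) {
  var companyName = form.companyNames;
  return companyName;
}

// Stand-in for the serialized sidebar form: {inputName: value}
var fakeForm = { companyNames: 'Acme Corp' }; // hypothetical selection
var selected = getCompany(fakeForm);
```

This is also why the code reads `form.companyNames` rather than `form.companyForm` – the keys come from the *names of the inputs*, not from the form’s ID.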

Think you could do this better? I’m all ears – seriously! I’m always looking for ways to improve my code.

Switching Between Companies with the QuickBooks API

I’ve got a client whose accounting team manages several companies in QuickBooks. We’ve automated some processes for them using the ultra-useful QuickBooks Accounting API and the powerful flexibility of Google Sheets, but up until now I’ve been setting up API connections on a per-company basis, which is kind of a pain to manage and would never be user-friendly for the accounting staff. So I wrote some code to make company switching a much easier process, and I’ve got some ideas for how to improve it further.

Step 1: use Data Validation to turn a cell into a drop-down selection tool

data validation

Data validation is super easy to use. Pick your cell(s), select the “List of items” option, and enter your company names. You wind up with a cell in your report where the end user can select the company they’ll be working with. This is going to drive the API call.

Step 2: Create an array of your companies and their QuickBooks IDs

Using the Google Sheets Script Editor, set up a .gs file – even the default will do.

Create an array of companies, like this:

var YOUR_COMPANIES = [
  ["Company 1", "1234567890"],
  ["Company 2", "9876543210"],
  ["Company 3", "1357924680"]
];

You’re ready to move on.

Step 3: Set up the rest of your scripts

I’m a big fan of using templates whenever possible – no point in reinventing the wheel when you don’t have to. So I highly recommend this OAuth2 for Apps Scripts Github repo which contains all of the building blocks you’ll need to connect Sheets to QuickBooks.

Step 4: Some slight modifications are in order.

function getCompany() {
  var companyName = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Input").getRange("I1").getValue();
  for (var i = 0; i < YOUR_COMPANIES.length; i++) {
    if (YOUR_COMPANIES[i][0] == companyName) {
      var companyId = YOUR_COMPANIES[i][1];
      return companyId;
    }
  }
  // only log a miss after every company has been checked
  Logger.log("No match found");
}
All this does is figure out which company in the array matches the company the end user selected, and then it returns the company’s QuickBooks ID so that it can be passed into another script.
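Stripped of the Sheets call, the array lookup can be sanity-checked on its own. Here’s a sketch using the same YOUR_COMPANIES shape (the IDs are made up, as above):

```javascript
// Same [name, id] pairs as the YOUR_COMPANIES array; IDs are made up
var YOUR_COMPANIES = [
  ["Company 1", "1234567890"],
  ["Company 2", "9876543210"],
  ["Company 3", "1357924680"]
];

// Pure lookup: no Sheets call, so it can be tested anywhere
function lookupCompanyId(companyName) {
  for (var i = 0; i < YOUR_COMPANIES.length; i++) {
    if (YOUR_COMPANIES[i][0] === companyName) {
      return YOUR_COMPANIES[i][1];
    }
  }
  return null;
}
```

Keeping the lookup separate from the SpreadsheetApp call makes it trivial to swap the data source later – a sidebar drop-down instead of cell I1, for instance.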

We take that company ID and we pass it into the URL of our GET request.

//get the company ID
var companyId = getCompany();

// call the API, return the object, and parse
var url = '' + companyId + '/query?query=Select%20%2a%20from%20Department&minorversion=4'
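That query string is just a SQL-ish statement run through URL encoding. Here’s a sketch with a placeholder host (the real QuickBooks base URL is omitted in the snippet above, so the one below is purely illustrative):

```javascript
// Placeholder host – not the real QuickBooks API base URL
var base = 'https://example.com/v3/company/';

// Encode the SQL-ish query and splice in the company ID
function buildQueryUrl(companyId, query) {
  return base + companyId + '/query?query=' + encodeURIComponent(query) + '&minorversion=4';
}

var url = buildQueryUrl('1234567890', 'Select * from Department');
```

Building the URL in one helper keeps the encoding consistent – spaces become %20 and so on – no matter which company ID the user’s selection produces.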

This is great – we have given our end user the ability to select which company’s data they’d like to GET via the API. But the biggest challenge remains – every time they need to change companies, they can’t just pass in a new company ID through the URL. QuickBooks uses OAuth2 which requires the end user to authenticate to a specific company, and then QuickBooks returns a token for that company. So when we want to switch companies, we need to break the connection and then re-authenticate.

Step 5: Let them reset the connection and re-establish it with a new company

I built this handy little custom menu to let the end user do all this work without really realizing everything that is entailed. That’s the beauty of good code – the end user just knows that it works 🙂

custom menu

When they click on Reset Connection, it runs the reset() function from the OAuth2 template.

When they click on QuickBooks Verification, it runs the makeRequest() function from the OAuth2 template. Our code runs, and since we haven’t authorized yet, we execute this ELSE statement:

} else {
  var authorizationUrl = service.getAuthorizationUrl();
  Logger.log('Open the following URL and re-run the script: %s', authorizationUrl);
  var htmlOutput = HtmlService
    .createHtmlOutput('Click the following link to continue: <a href="' + authorizationUrl + '" target="_blank">Click Here</a>');
  SpreadsheetApp.getUi().showModalDialog(htmlOutput, 'Verification');
}

So in the OAuth2 template, the script just sends the authorization URL to the log. I needed to send the URL to the end user so they could click on it, so I created a modal that pops up and gives them a link to click. The end user clicks the link, they authorize to the company they originally selected, and then they redirect to a success page. Then they can go back to the Sheet and run the Refresh Customers menu item, which unbeknownst to them just runs the makeRequest() function again. This time, since they’re authorized, their data refreshes. Poof. Magic.

How could I make this better? For starters, I’d like to move all of this into a custom sidebar. Rather than using Data Validation to turn a cell into a drop-down menu, just pop open a sidebar where the drop-down menu can live. And then move the three custom menu items into the sidebar as well so all of their selections and clicks can happen in one place.

What else would you do to make this easier for the end user?

Sheets to Docs Mail Merge

TL;DR: I needed a way to perform a mail merge, passing data from a Google Sheet over to a Doc. Nearly everything works, with the exception of two complicating factors – the template file I’m copying includes a table, and one of the table cells must contain an image. I’m stumped!

If you’re anything like me, when you think “mail merge” you don’t think “G Suite”. For years I’ve been mail merging with Excel and Word because…it’s easy. You open the wizard, you click some buttons, and poof – labels, envelopes, whatever. But a recent project requiring a QR code API and product labels has me digging into the depths of G Suite for a solution that will let me get the QR codes via an API and then merge each code, along with some product data, from a Google Sheet into a Doc.

Pulling in the QR codes was a piece of cake.

// Writes images to a spreadsheet
function generateCodes() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getActiveSheet();
  var firstRow = 2;
  var lastRow = sheet.getLastRow();
  var barCol = 2;
  var codeCol = 8;

  for (var i = firstRow; i <= lastRow; i++) {
    var currentValue = sheet.getRange(i, barCol).getValue();
    var url = "" + currentValue;
    var urlRequest = UrlFetchApp.fetch(url).getBlob();
    var qrValue = sheet.insertImage(urlRequest, codeCol, i);
  }
}

// Deletes all images in a spreadsheet
function clearCodes() {
  var images = SpreadsheetApp.getActiveSpreadsheet().getImages();
  for (var i = 0; i < images.length; i++) {
    images[i].remove();
  }
}

After writing that code, I realized it would be even easier to just insert a Sheets formula – I don’t really need to use a FOR loop to write images to the spreadsheet, just point to their location. Just use something like:

=arrayformula(image(hyperlink("" & B2:B9)))

That formula grabs the base URL for the QR code API, tacks on the barcode or product ID from column B, and spits out the QR code itself into an array of cells.

Now that the QR codes are created, it’s time to mail merge. This is where things get tricky.

There are a couple of decent add-ons already out there that will do this – autoCrat was the best that I found. It had a slick interface, a lot of customization, and best of all – it worked. But how does it work? I needed to find out. So I did some digging to see if I could write some code that would do basically the same thing, just without most of the polish.

I found this StackExchange question and answer where a user basically laid out exactly what needed to happen to create a basic mail merge. They even went as far as to dump the code in Github. So I ran the code, and it almost all worked – everything except the QR code passed through.

I logged some test results, and I figured out that the script sees my template as a paragraph, a table, and another paragraph element. It’s writing text to the paragraph elements just fine, but when it comes to writing the QR code to the table, it’s not producing the intended result. If I change the field that I’m trying to write to the table – say, to regular text – then it works. So there has to be something about the QR code image that this script doesn’t like.

} else if (child.getType() == DocumentApp.ElementType.TABLE) {

Maybe I’m writing the table to the Doc, but the image isn’t being written to the cell? Or maybe the problem is in the FOR loop that overwrites the existing template contents?

for (var f = 0; f < fieldNames.length; f++) {
  body.replaceText("\\[" + fieldNames[f] + "\\]", row[f]); // replace [fieldName] with the respective data value
}
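The replaceText pattern can be reproduced with plain strings to see what a successful merge should produce. In this sketch the field names and template text are made up, and a RegExp stands in for the Doc’s replaceText call:

```javascript
// Plain-string sketch of the [fieldName] replacement loop (template text is made up)
var fieldNames = ['product', 'sku'];
var row = ['Widget', 'W-100'];
var body = 'Label for [product] (SKU [sku])';

for (var f = 0; f < fieldNames.length; f++) {
  // mirrors body.replaceText("\\[" + fieldNames[f] + "\\]", row[f]) in the Doc
  body = body.replace(new RegExp('\\[' + fieldNames[f] + '\\]', 'g'), row[f]);
}
```

If the text replacement works in isolation like this, it narrows the Doc problem down to how images (as opposed to text) get written into table cells.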

I’m at a loss. What do you think?

You Get What You Pay For

I learned an important lesson this week.

In my role as a professional web designer, I do my best to respect my clients’ budgets. I know that not everyone has $40,000 lying around for a website. Sometimes – especially when using an open-source system like WordPress – you’ll find there are multiple ways to get the job done. For example, if you want a plug-in that can modify your functions.php file, a quick search will turn up half a dozen viable options. And usually there’s at least one free option that will work out just fine. No need to pass along the cost of a premium plug-in to the customer. Everybody wins.

But on my latest project we ran into a problem. My client needs to offer financing options by means of a subscription system – the end user should be able to pick between 36-, 60-, and 120-month financing options. Each comes with a different monthly cost, of course. So, as I usually do, I browsed the plug-in marketplace and found that there were a number of options that promised to handle subscriptions. And since my client wanted to use Stripe as the payment processor, and Stripe also offers the ability to set up subscriptions, I wanted to use a plug-in that would allow my client to manage the subscriptions from Stripe’s back-end while offering his customers the ease-of-use that comes from a WooCommerce front-end.

I went through iteration after iteration, changed countless settings, chased that rabbit as far down the rabbit hole as I could go. Multiple plug-ins. Multiple configurations.

Nothing. Nada. Couldn’t get it to work.

I had to admit defeat.

And, in the end, my client had to shell out $199 for the “official” WooCommerce Subscriptions plug-in. And guess what? It works! Out of the box!

It was truly a case of “you get what you pay for”. I spent hours trying to get the free plug-ins to do something that they simply couldn’t do and honestly I’m going to take the hit for some of those hours – no sense in billing my client for the time it took to experiment with the free options. Chalk it up as a learning experience.

That’s not to say that all free WordPress plug-ins are bad, or that paid options are always better. But if you’re a WordPress designer/developer, don’t forget that sometimes it might cost you and your client more in the time and labor it takes to force workarounds for free plug-ins when you could just pay for the one that works right away.