Salesforce is an amazing tool. It has pushed the boundary of what people can expect from a CRM and has long been a titan of the CRM marketplace.
Until very recently, Blackbaud’s Raiser’s Edge (RE) was the de facto standard for nonprofits. Many mature nonprofits have 20+ years of RE data. They never had a dedicated ‘administrator’ for RE – it was usually someone in the development department making changes to meet the need of the moment. That can lead to all sorts of data quality management issues (a topic in and of itself). My organization made the decision 18+ months ago to move from RE to NPSP. I am going to try to share our experience and process, lessons learned, and takeaways/solutions to obstacles along the way. Hopefully there will be a nugget of info in this series that assists another nonprofit on its journey. Look for the “Road to NPSP” category here.
Here are the quick stats on our org:
2 Full time IT people
10 Development Staff
4 other departments whose data and business processes we are onboarding
I ran into a request to create a formula field that would display a giving society badge (image) on a household/account record.
Requirements: A household can be a member of none, one, two, or all three possible giving circles. Giving circle membership is defined by three separate fields on the record.
So I have the possible combinations of:
No giving circle
Giving Circle A
Giving Circle B
Giving Circle C
Giving Circle A&B&C
Giving Circle B&C
Giving Circle A&C
Giving Circle A&B
I’m pretty new to Salesforce, so I did a fair amount of research but couldn’t find many examples like this one to work from, so I thought I’d share my solution to the problem (update: it’s working great and they’re thrilled). Please note that this may be the absolute WORST way to write this; I take no responsibility for the results. If you’re looking for how to get the images uploaded and their resource URLs: https://help.salesforce.com/articleView?id=000327122&type=1&mode=1
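To make the approach concrete, here is a minimal sketch of a text formula that covers all eight combinations above. This is my own hedged reconstruction, not necessarily the exact formula used: it assumes the three defining fields are checkboxes, and every field and static-resource name below (Giving_Circle_A__c, CircleABadge, and so on) is a placeholder you would replace with your org’s actual API names and the resource URLs from the linked article.

```
/* Sketch only: assumes three checkbox fields and three static-resource
   badge images; all names here are placeholders, not the author's. */
IF(Giving_Circle_A__c, IMAGE("/resource/CircleABadge", "Giving Circle A", 25, 25), "") &
IF(Giving_Circle_B__c, IMAGE("/resource/CircleBBadge", "Giving Circle B", 25, 25), "") &
IF(Giving_Circle_C__c, IMAGE("/resource/CircleCBadge", "Giving Circle C", 25, 25), "")
```

Because the three IF() results are simply concatenated with &, one short formula handles every combination – including “no giving circle,” which renders nothing – without enumerating all eight cases.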
I ran into this problem during my first round of CI/CD functional testing in our GitLab environment. At some point the system was updated and the Auto DevOps configuration was enabled for every repository. It so happens that we were also testing the Kubernetes integration, and had created a shared runner to handle Docker container deployment in our OpenShift cluster. Fast-forward past the failures and bad housekeeping, and what you find today is approximately 100,000 records of failed automatic deployments pointed at a shared runner that no longer exists.
I didn’t find a way to auto-select and clear the records, and since the runner was deleted some time ago, my only remediation was to click the delete option next to each record. Even if I did have time for 100,000 mouse clicks, I refuse to do that on principle.
Note: Take whatever safeguard measures you prefer before trying this; snapshots of the virtual machine, application database backups, etc. This is just how I fixed my problem and in no way a guarantee that nothing else will break.
After some research, in which I looked for rake tasks, migration files, etc., I found that there was a way to get into the database console.
$ gitlab-rails dbconsole
Running this command at a root prompt on your GitLab instance will drop you into the database console. From here you are working directly in the GitLab database, so be careful. After getting into the database, I took a look at the tables using \d, a standard meta-command for my chosen PostgreSQL back end.
                                  List of relations
 Schema |                          Name                          |   Type   | Owner
--------+--------------------------------------------------------+----------+--------
 public | abuse_reports                                          | table    | gitlab
 public | abuse_reports_id_seq                                   | sequence | gitlab
 public | appearances                                            | table    | gitlab
 public | appearances_id_seq                                     | sequence | gitlab
 public | application_setting_terms                              | table    | gitlab
 public | application_setting_terms_id_seq                       | sequence | gitlab
 public | application_settings                                   | table    | gitlab
 public | application_settings_id_seq                            | sequence | gitlab
 public | approval_merge_request_rule_sources                    | table    | gitlab
 public | approval_merge_request_rule_sources_id_seq             | sequence | gitlab
 public | approval_merge_request_rules                           | table    | gitlab
 public | approval_merge_request_rules_approved_approvers        | table    | gitlab
 public | approval_merge_request_rules_approved_approvers_id_seq | sequence | gitlab
 public | approval_merge_request_rules_groups                    | table    | gitlab
 public | approval_merge_request_rules_groups_id_seq             | sequence | gitlab
 public | approval_merge_request_rules_id_seq                    | sequence | gitlab
 public | approval_merge_request_rules_users                     | table    | gitlab
From there I found a table that was likely behind my view, ci_runners, so I ran a simple SELECT of all records to verify that the records in the view matched the rows in the database table.
gitlabhq_production=> select * from ci_runners;
Bingo!!! Now that I had the table, I wanted to make sure that removing the records wouldn’t impact the application negatively, so I took a look at the foreign keys referencing ci_runners.
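There is more than one way to find those references; here is a sketch using the standard PostgreSQL system catalogs (nothing GitLab-specific). Running \d ci_runners in psql also prints a “Referenced by:” section with the same information.

```sql
-- List foreign-key constraints in other tables that point at ci_runners
-- (PostgreSQL pg_catalog; works in any psql session)
SELECT conrelid::regclass AS referencing_table,
       conname            AS constraint_name
FROM pg_constraint
WHERE confrelid = 'ci_runners'::regclass
  AND contype = 'f';
```

An empty result here, together with empty SELECTs on the related runner tables, is what gives you confidence the delete won’t orphan anything.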
In my case there was no associated data, so with my snapshot in place, I deleted the rows:
gitlabhq_production=> select * from clusters_applications_runners;
gitlabhq_production=> select * from ci_runner_namespaces;
gitlabhq_production=> delete from ci_runners where ip_address = 'XXX.XXX.XXX.XXX';
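As an extra safety net on the destructive step, the same delete can be wrapped in a transaction – standard PostgreSQL, not specific to GitLab – so that psql’s reported row count can be sanity-checked before anything becomes permanent:

```sql
BEGIN;
DELETE FROM ci_runners WHERE ip_address = 'XXX.XXX.XXX.XXX';
-- psql prints "DELETE <count>"; if the count looks wrong:
--   ROLLBACK;
-- otherwise make it permanent:
COMMIT;
```

With a transaction open, ROLLBACK restores the rows instantly, which is a much faster undo than reverting a whole VM snapshot.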
This cleared up the view, and as of this post – after deploying a new runner and running some dedicated pipeline jobs – I have seen no negative impacts. This may not be the proper way to approach the problem, but it definitely saved me from clicking a button 94,514 times.