
Spatial Front, Inc.

B-421561.14 Apr 05, 2024

Highlights

Spatial Front, Inc. (SFI), a small business located in McLean, Virginia, protests the decision of the Department of Health and Human Services, Centers for Medicare and Medicaid Services (CMS), not to select its quotation for the establishment of a blanket purchase agreement (BPA) under request for quotations (RFQ) No. 75FCMC21Q0013, for agile collaboration and modernization endeavors. The RFQ sought to establish multiple BPAs with vendors holding contracts under General Services Administration Multiple Award Schedule special item number (SIN) 54151S, for information technology professional services. The protester challenges the agency's evaluation and award decisions.

We deny the protest.

DOCUMENT FOR PUBLIC RELEASE
The decision issued on the date below was subject to a GAO Protective Order. This redacted version has been approved for public release.

Decision

Matter of: Spatial Front, Inc.

File: B-421561.14

Date: April 5, 2024

Jonathan T. Williams, Esq., Katherine B. Burrows, Esq., Lauren R. Brier, Esq., Patrick T. Rothwell, Esq., and Dozier L. Gardner Jr., Esq., Piliero Mazza, PLLC, for the protester.
Amy L. O’Sullivan, Esq., Zachary H. Schroeder, Esq., and David H. Favre III, Esq., Crowell & Moring LLP, for Softrams, LLC; Richard P. Rector, Esq., Dawn E. Stern, Esq., and Christie M. Alvarez, Esq., DLA Piper LLP (US), for Nava Public Benefit Corporation; Stephen L. Bacon, Esq., Alexandria Tindall Webb, Esq., and Cindy Lopez, Esq., Rogers Joseph O’Donnell, PC, for Flexion, Inc.; and Ryan C. Bradel, Esq., Chelsea A. Padgett, Esq., and Camille L. Chambers, Esq., Ward & Berry, PLLC, for Coforma, LLC, the intervenors.
Pamela R. Waldron, Esq., and Christopher A. Monsey, Esq., Department of Health and Human Services, for the agency.
Heather Weiner, Esq., and Jennifer D. Westfall-McGrail, Esq., Office of the General Counsel, GAO, participated in the preparation of the decision.

DIGEST

Protest challenging agency’s technical evaluation is denied where the evaluation was reasonable and consistent with the terms of the solicitation and any differences in the evaluation of quotations stemmed from the vendors’ different offerings.

DECISION

Spatial Front, Inc. (SFI), a small business located in McLean, Virginia, protests the decision of the Department of Health and Human Services, Centers for Medicare and Medicaid Services (CMS), not to select its quotation for the establishment of a blanket purchase agreement (BPA) under request for quotations (RFQ) No. 75FCMC21Q0013, for agile collaboration and modernization endeavors. The RFQ sought to establish multiple BPAs with vendors holding contracts under General Services Administration Multiple Award Schedule special item number (SIN) 54151S, for information technology professional services. The protester challenges the agency’s evaluation and award decisions.

We deny the protest.

BACKGROUND

On June 21, 2021, the agency issued the RFQ to vendors holding contracts under General Services Administration Multiple Award Schedule SIN 54151S, for information technology professional services, using the procedures of Federal Acquisition Regulation (FAR) subpart 8.4. Agency Report (AR), Tab 3A, RFQ at 1, 3. The solicitation was set aside for small businesses and anticipated the establishment of multiple BPAs contemplating the issuance of fixed-price, time-and-materials, and labor-hour call orders for a 5-year ordering period.[1] Id. Award was to be made on a best-value tradeoff basis, considering price and the following four non-price factors, of equal importance: (1) design demonstration; (2) development, security, and operations (DevSecOps) case study; (3) technical challenge; and (4) corporate capabilities.[2] Id. at 21. The solicitation provided that all evaluation factors other than price, when combined, are significantly more important than price. Id. at 20.

Initial Evaluation and Award Decision

The procurement was conducted in three phases using an advisory “down select” methodology. Contracting Officer’s Statement (COS) at 2. The agency received 63 phase I quotations by the July 2021 due date; 36 phase II quotations by the March 16, 2022 due date; and 22 phase III quotations by the August 15, 2022 due date. Id. After evaluation by a technical evaluation panel (TEP), the contracting officer, who was also the source selection authority, conducted a comparative assessment of the quotations and performed a best-value tradeoff, finding that there were eight quotations that provided the best value to the government. AR, Tab 23, Post Negotiation Memorandum at 10.

Consistent with the terms of the solicitation, which provided that after the agency had selected the apparent successful contractors, it could engage in communications solely with those firms to address any remaining issues, the agency held exchanges and received quotation revisions from the eight vendors identified as the “best-suited.” COS at 6. After considering the quotation revisions submitted by the eight apparent successful contractors, the contracting officer confirmed his selection decision that these eight vendors provided the best value to the government. AR, Tab 21A, Revised Source Selection Decision (SSD) at 3-4.

In March of 2023, the agency established BPAs for the solicited requirement with the eight vendors that had been identified as the apparent successful contractors.[3] Memorandum of Law (MOL) at 4. Five firms, including SFI, filed protests with our Office challenging the propriety of the agency’s actions. Id. Those protests raised a variety of challenges to the agency’s evaluation of quotations; the agency’s conduct during exchanges; and the reasonableness of the agency’s source selection decision. In response, CMS advised that it was taking corrective action to reevaluate quotations and make a new award decision.[4] Our Office dismissed the protests, including SFI’s, as academic.[5] Spatial Front, Inc., B-421561.2, B-421561.7, June 27, 2023 (unpublished decision).

Corrective Action, Reevaluation, and New Award Decision

The agency’s corrective action involved a reevaluation under the technical evaluation factors. MOL at 4; COS at 2-4, 8. The agency’s evaluation included the assignment of positives and negatives under each evaluation factor. Following the agency’s reevaluation, the agency assigned SFI and the eight vendors that had been identified as the apparent successful contractors the following ratings:[6]

Vendor   | Design Capabilities | DevSecOps Case Study | Technical Challenge | Corporate Capabilities | Price
---------|---------------------|----------------------|---------------------|------------------------|-------------
Bellese  | High Confidence     | High Confidence      | High Confidence     | High Confidence        | $98,731,573
Flexion  | Moderate Confidence | High Confidence      | High Confidence     | High Confidence        | $97,943,150
Softrams | Moderate Confidence | High Confidence      | High Confidence     | High Confidence        | $85,500,990
Coforma  | High Confidence     | High Confidence      | High Confidence     | Moderate Confidence    | $115,257,393
Dynanet  | High Confidence     | High Confidence      | Low Confidence      | High Confidence        | $94,897,286
Nava     | High Confidence     | High Confidence      | Moderate Confidence | High Confidence        | $109,154,358
Octo     | Moderate Confidence | Moderate Confidence  | High Confidence     | High Confidence        | $84,438,113
Oddball  | High Confidence     | Moderate Confidence  | High Confidence     | Moderate Confidence    | $100,098,998
SFI      | Moderate Confidence | Moderate Confidence  | Low Confidence      | High Confidence        | $74,818,454

AR, Tab 21A, Revised SSD at 1-2; COS at 7-8.

After evaluating quotations, CMS again determined that the quotations submitted by the eight apparent successful contractors provided the best value to the government. AR, Tab 21A, Revised SSD at 3-4. Based on the reevaluation, the agency selected the same eight vendors for the establishment of BPAs. Id. On December 26, 2023, the agency notified SFI that its quotation had not been selected for establishment of a BPA. AR, Tab 25A, Unsuccessful Vendor Notification. After receiving a brief explanation on December 28, SFI filed a protest with our Office on January 5, 2024. AR, Tab 25B, Brief Explanation.

DISCUSSION

SFI challenges numerous aspects of the agency’s evaluation of quotations. SFI first argues that the agency unreasonably assigned negatives to SFI’s quotation under the design demonstration and technical challenge factors. Next, SFI alleges the agency engaged in disparate treatment under the DevSecOps case study and technical challenge factors because it failed to give SFI equal credit for aspects of its quotation that, the protester asserts, were substantially identical to elements of the awardees’ quotations for which the awardees received positives. SFI further contends that it should have received higher confidence ratings under the design demonstration, DevSecOps case study, and technical challenge factors based on the number of positives and negatives assessed for each factor. Finally, SFI argues that the agency failed to conduct a reasonable best-value tradeoff. For the reasons discussed below, none of the protester’s arguments provide a basis to sustain the protest.[7]

Where an agency conducts a competition for the establishment of a BPA under FAR subpart 8.4, we will review the agency’s actions to ensure that the evaluation was conducted reasonably and in accordance with the solicitation and applicable procurement statutes and regulations. Citizant, Inc.; Steampunk, Inc., B-420660 et al., July 13, 2022, 2022 CPD ¶ 181 at 5. In reviewing an agency’s evaluation, we will not reevaluate quotations; a protester’s disagreement with the agency’s judgments does not establish that the evaluation was unreasonable. Id.; Digital Sols., Inc., B-402067, Jan. 12, 2010, 2010 CPD ¶ 26 at 4.

Evaluation of Negatives

SFI challenges two negatives assigned to its quotation under the design demonstration factor and three negatives assigned under the technical challenge factor.

Design Demonstration

Under the design demonstration factor, the agency assigned SFI’s quotation three positives and two negatives. The first negative was for failing to demonstrate insights from user research. The second negative was for failing to explain in sufficient detail how research was leveraged to influence design. SFI challenges both findings, arguing that CMS’s evaluation was unreasonable because the agency overlooked information that was included in its quotation. The agency asserts that its assessment of the negatives was reasonable because SFI’s quotation was too general and lacked specific detail. As discussed below, we find no merit to the protester’s arguments.

Under the design demonstration factor, vendors were instructed to submit a design case study, consisting of one or more projects, demonstrating their “design capabilities by showcasing design work[.]” RFQ at 12. Each vendor’s case study was to place specific “focus on the process and artifacts[8] developed.” Id. A YouTube video submission was to accompany each vendor’s case study, and “demonstrate[] the products or services contained within the case study.” Id. The entire body of work was to demonstrate each vendor’s use of the three key design techniques, including as pertinent here, user research/generative research (i.e., research work done to understand the users and define the problem). RFQ at 13.

For this factor, the solicitation provided that the agency’s evaluation would take into consideration a multitude of items, including but not limited to: any risk identified in the vendor’s approach/quotation, potential benefits of the vendor’s approach, and innovations demonstrated in the vendor’s approach. Id. at 21.

SFI submitted one design case study and an associated YouTube video. AR, Tab 5B, SFI Quotation (Factor 1) at 1. During the reevaluation of SFI’s quotation, the TEP assessed the two negative findings noted above--one for failing to demonstrate insights from user research, and the other for failing to demonstrate how the research was leveraged to influence design. AR, Tab 6B, TEP Eval. (Factor 1) at 4-5. With regard to the first negative finding under user research/generative research, the TEP found that, although SFI’s quotation “[DELETED],” SFI does “not provide further detail to make sufficiently clear what insights were gathered from research.” Id. at 4; TEP Statement at 1. The TEP concluded that “[t]his lack of detail lowers CMS’[s] confidence in the [vendor’s] ability to effectively gather user insights as a result of a research process.” AR, Tab 6B, TEP Eval. (Factor 1) at 4.

The protester disagrees with the agency’s assessment of this negative and contends that its quotation did in fact “demonstrat[e] . . . the insights gathered from user research[.]” Protest at 13. In support of this argument, SFI cites sections of its quotation that, it asserts, contain the required detail. Protest at 13-17. For example, as relevant here, the protester explains that “the project referred to in [SFI’s] video and the design case study [DELETED].”[9] Protest at 13 (citing AR, Tab 5B, SFI Quotation (Factor 1) at 2, 4; AR, Tab 5A, SFI Video Transcript at 2). The protester then identifies several sections of its quotation that, it contends, “explained that the [DELETED].”[10] Protest at 15. The protester maintains that “[t]his is hardly a bare statement lacking in factual detail, context, or rationale.” Supp. Comments at 3.

In response, the contracting officer and TEP explain that they looked at the referenced portions of SFI’s quotation during the evaluation and found them lacking. COS at 13; TEP Statement at 1. For example, the contracting officer states that “[w]hile SFI discusses basic design conclusions and SFI refers generally to processes and methods,” its quotation “does not in any way connect what specific information was gathered and/or how that specific insight influenced the design of the system.” COS at 12. In addition, the TEP notes that, in the “User Research/Generative research section of [SFI’s] Case Study,” which discussed [DELETED] and development of the new [DELETED], “the vendor notes [DELETED], for example stating that they [DELETED].’” TEP Statement at 1 (citing AR, Tab 5B, SFI Quotation (Factor 1) at 2). The TEP explains, however, that “[t]hese are common outcomes which could be applied to any design project, and do not give meaningful insight into any specific users or design functionality they may require.” TEP Statement at 1. Further, the contracting officer notes that “[w]hile [SFI] refers to a set of personas to assert that they made research insights clear (Figure 3, pg. 3, Case Study)[,] the TEP could not evaluate them because they are overlapping in a manner that obstructs viewing, and the text is too small to read, even when enhanced.”[11] COS at 12; TEP Statement at 2.

Based on the record, we find nothing unreasonable regarding the agency’s assessment of the negative finding. As discussed above, the TEP and contracting officer identify numerous instances in SFI’s quotation where they concluded that SFI failed to provide adequate detail to make it sufficiently clear what insights were gathered from research. COS at 12; TEP Statement at 2. The TEP also states that it reviewed SFI’s quotation, including the sections quoted in the protest, but found that these sections did not provide further detail to make sufficiently clear what insights were gathered from research. TEP Statement at 1. While in response, the protester again points to an example in its quotation regarding [DELETED], which the protester asserts, “gave [DELETED] insights in how to [DELETED],” Supp. Comments at 3, the protester does not sufficiently refute any of the instances pinpointed by the agency in which SFI’s quotation lacked detail or clarity or otherwise demonstrate how this cited example remedies those problems. Ultimately, the record reflects that the TEP evaluated SFI’s quotation, including the quoted sections, and found that SFI’s quotation failed to provide information and details specific to the [DELETED] project, and instead used high level generalities that could apply to any project. To the extent SFI contends that its quotation submission was adequate or should have been evaluated differently, the protester’s disagreement with the agency’s evaluation provides no basis to sustain the protest. Digital Sols., Inc., supra.

Similarly, we find no merit to the protester’s challenge of the second negative assigned to its quotation under the design demonstration factor. As discussed above, in response to this factor, SFI’s quotation provided one design case study and an accompanying YouTube video for the [DELETED] project. The TEP assessed a negative to SFI’s quotation because the vendor “did not explain in sufficient detail how research was leveraged to influence design.” AR, Tab 6B, TEP Eval. (Factor 1) at 4; TEP Statement at 2. The TEP explained that SFI’s quotation stated that “[b]ased on the initial user research and user needs definitions, [DELETED].” Id. at 4-5. The TEP found, however, that SFI does “not explain in sufficient detail how research and outcomes such as resulting artifacts specifically informed these UX/design activities, reducing CMS’s confidence in the Offeror’s ability to apply research and outcomes to UX/design.” Id. at 5.

The protester argues that the agency’s assessment of this weakness was unreasonable and that its quotation adequately addressed how research was leveraged to influence design.

Based on our review of the record, we find nothing unreasonable regarding the agency's assessment of the negative. Although the protester asserts that SFI’s quotation adequately “explain[ed] in sufficient detail how research was leveraged to influence design,” Protest at 17, it is not apparent from SFI’s quotation or the protester’s submissions to our Office in connection with this protest, how the cited provisions in the quotation address the agency’s specific concern about how research and outcomes, such as resulting artifacts, specifically informed the UX/design activities listed in SFI’s quotation (i.e., [DELETED]). In this regard, although the protester block quotes paragraphs from SFI’s quotation, it is not clear the text includes any information addressing the agency’s concern, and the protester does not provide any explanation to demonstrate how the cited text, in fact, responds to the assessed weakness. As our Office has recognized, vendors are responsible for submitting well-written quotations with adequately detailed information that allows for a meaningful review by the procuring agency. Riva Solutions, Inc., B‑417858.2, B-417858.10, Oct. 29, 2020, 2020 CPD ¶ 358 at 8. Here, the protester failed to do so. To the extent SFI believes that its response adequately addressed the agency’s concerns, the protester’s disagreement with the agency’s evaluation does not demonstrate that the evaluation was unreasonable or otherwise provide a basis to sustain the protest. Digital Sols., Inc., supra. In sum, we find nothing unreasonable regarding the two negatives assessed to SFI’s quotation under the design demonstration factor. Accordingly, this protest allegation is denied.

Technical Challenge

The protester challenges the agency’s evaluation of its quotation under the technical challenge factor. For this factor, the agency assessed SFI’s quotation three negatives and one positive, resulting in an overall rating of low confidence. The protester argues that the agency’s assignment of the three negatives was unreasonable. We find no merit to the protester’s arguments and discuss one representative example below.

Factor 3 involved a technical challenge that required vendors to design and implement Amazon Web Services (AWS)-hosted application programming interfaces (APIs) to flag patients with chronic conditions impacted by disasters. See AR, Tab 18A, RFQ amend. 009 at 17-21. To facilitate the agency’s review and evaluation, the RFQ required vendors to submit a README.MD file with a script (software code), or the location of the script in the vendor’s GitHub repository,[12] that would install, deploy, and test the solution. Id. at 18. The RFQ required that the script successfully install and configure the AWS environment, compile (as necessary) and deploy the solution, run tests associated with the solution, and remove the installed solution from the AWS environment. Id.
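For context, the kind of single entry point the RFQ describes (a script that installs and configures the AWS environment, deploys the solution, runs its tests, and removes it) can be sketched roughly as follows. This is a minimal illustrative sketch, not any vendor's actual submission; the stack name, template path, and test directory are hypothetical, and it assumes the AWS CLI and pytest are available.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a single entry-point deployment script of the kind
# the RFQ describes. The stack name, template path, and test directory are
# illustrative assumptions, not taken from any vendor's submission.
set -euo pipefail

STACK_NAME="disaster-flagging-api"    # hypothetical stack name
TEMPLATE="infrastructure/stack.yaml"  # hypothetical IaC template path

deploy() {
  # Install and configure the AWS environment, then deploy the solution.
  aws cloudformation deploy \
    --stack-name "$STACK_NAME" \
    --template-file "$TEMPLATE" \
    --capabilities CAPABILITY_IAM
}

run_tests() {
  # Run the tests associated with the solution.
  python -m pytest tests/
}

teardown() {
  # Remove the installed solution from the AWS environment.
  aws cloudformation delete-stack --stack-name "$STACK_NAME"
  aws cloudformation wait stack-delete-complete --stack-name "$STACK_NAME"
}

# Dispatch on the first argument; print usage if none is given.
cmd="${1:-help}"
case "$cmd" in
  deploy)   deploy ;;
  test)     run_tests ;;
  teardown) teardown ;;
  *)        echo "usage: $0 {deploy|test|teardown}" ;;
esac
```

A script along these lines replaces manual, step-by-step README instructions with a single command per phase, which is the automation property the TEP's evaluation, discussed below, was concerned with.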

In evaluating SFI’s quotation, the agency found that SFI failed to propose a script that satisfied these requirements. AR, Tab 20A, TEP Report (Factor 3) at 6. Specifically, the TEP found that SFI’s “README.MD file does not include a script for installing and configuring the AWS environment or a script for compiling and deploying the solution as required”; instead, the README.MD file includes “hundreds of lines of instruction and images for the manual completion of these steps by an administrator.” Id. The TEP explained that providing a script “demonstrates the ability to automate tedious manual tasks, reducing the risk of mistakes and increasing the frequency with which changes to the system can be implemented.” Id. The TEP concluded that “[t]he lack of such scripts reduces the [g]overnment’s confidence in the vendor’s ability to rapidly, reliably deploy AWS-hosted projects.” Id. The TEP stated that “[t]his may result in longer times for producing useful features and a higher probability of misconfigured components during release” and that “[t]hese issues could result in delays of useful features to customers, or the inability for Medicare customers or employees to correctly use a system.” Id. As a result, the TEP assigned a negative to SFI’s quotation. Id.

The protester contends that the agency’s assessment of this weakness was unreasonable, arguing that the “RFQ did not instruct vendors to include scripts in the README.MD file,” but rather, “instructed vendors to include the location of their scripts in the README.MD file.” Protest at 34 (citing AR, Tab 18A, RFQ amend. 009 at 18) (“[T]he [vendor’s] repository shall include a README.MD file which includes: The location of a script in the [vendor’s] GitHub repository for” various functions.). In this regard, SFI states that the “README.MD file is intended to include instructions,” the purpose of which “is to walk the user through the process of installing and utilizing the solution.” Id. at 35-36. The protester contends that its quotation included such instructions and therefore met the requirements of the RFQ.

The agency does not disagree that vendors were permitted to include instructions with the location of their scripts in the README.MD file. TEP Statement at 18. In response to the protest, the TEP explains that the “negative aspect identified is not for the inclusion of instructions [in the README.MD file], but the manual nature of these instructions.” Id. In this regard, the TEP explains that “[c]omprehensive documentation in general is useful but part of effective documentation is creating streamlined processes for development so that both the steps and the language around them are only as complex as necessary.” Id. The TEP states that “[w]hat [SFI] provided in [its] documentation was not streamlined and significantly reduces the documentation’s usefulness.” Id. The TEP further explains that “[a] process including so many manual steps introduces significant risk of user error if any one of them is missed.” Id. The TEP noted that “[i]n this case, [SFI] required steps including creating a remote Windows desktop environment and downloading a remote desktop client to access a deployed website,” which the TEP states, is “a highly unusual process when the challenge was to deploy a web API inherently designed to not require a desktop environment.” Id. As a result, the TEP states that “the manual nature of the instructions provided reduced the impact of their usefulness.” Id.

We find nothing unreasonable regarding the agency’s assessment of the negative to SFI’s quotation. The record reflects that SFI’s README.MD file included hundreds of lines of instructions that required multiple, manual completion steps, which the TEP found was an overly complicated process that created problems for the TEP’s ability to access and deploy the solution, and which the agency found, creates significant execution risks for the agency. TEP Statement at 17; COS at 23. SFI does not rebut the agency’s criticism regarding the manual nature of its solution; instead, the protester asserts that our Office should disregard the agency’s position as “post hoc.” Comments at 21. The record reflects, however, that the agency’s position tracks with concerns expressed contemporaneously by the TEP during the evaluation, such as SFI’s README.MD file “include[s] hundreds of lines of instruction and images for the manual completion of these steps by an administrator,” which the TEP found “reduces the [g]overnment’s confidence in the vendor’s ability to rapidly, reliably deploy AWS‑hosted projects.” AR, Tab 20A, TEP Report (Factor 3) at 6. In this case, the explanation provided by the TEP and contracting officer is reasonable and consistent with the contemporaneous evaluation, and the protester’s arguments represent nothing more than disagreement with the agency’s judgment and do not provide a basis to conclude the agency’s evaluation was unreasonable. This protest ground is denied.

Disparate Treatment

The protester contends that CMS should have assessed its quotation with additional positives under the DevSecOps case study and technical challenge factors. SFI argues that the agency engaged in disparate treatment in its evaluation of SFI’s quotation under these two factors because multiple awardees were assessed positives for attributes that SFI claims were also present in its quotation, but for which it was not credited with positives.[13] The agency argues that the difference in evaluations was based on differences in the vendors’ quotations and that aspects of the protester’s quotation did not exceed the RFQ’s requirements. We find the protester’s arguments provide no basis to sustain the protest and discuss one representative example below.[14]

In conducting procurements, agencies may not generally engage in conduct that amounts to unfair or disparate treatment of competing vendors. Arc Aspicio, LLC; et al., B-412612 et al., Apr. 11, 2016, 2016 CPD ¶ 117 at 13. Where a protester alleges unequal treatment in a technical evaluation, it must show that the differences in ratings did not stem from differences between the vendors’ quotations. See Camber Corp., B‑413505, Nov. 10, 2016, 2016 CPD ¶ 350 at 8.

SFI maintains that two awardees, Dynanet and Octo, received positive findings under the DevSecOps case study factor for open-source components. Protest at 30. Under this factor, vendors were required to submit a “DevSecOps Case Study.” AR, Tab 17A, RFQ amend. 0008 at 14. The solicitation provided that the case study “shall demonstrate the [vendor’s] capabilities: [e]xecuting modern DevSecOps practices” and that the evaluation of this factor “may take into consideration a multitude of items including but not limited to . . . [a]ny risk identified in the [vendor’s] approach/[quotation] submission” and the vendor’s “ability to demonstrate expertise in key areas identified in this factor[.]” Id. at 15, 29.

SFI asserts that its quotation should have received a similar positive because its presented solution included a large number of open-source components. Protest at 30 (citing AR, Tab 12C, SFI Quotation (Factor 2) at 1) (listing use of the following open-source components in SFI’s quotation: [DELETED]). The protester contends that it was unreasonable for “this extensive list [of] open-source components to go unnoticed during SFI’s evaluation.” Protest at 30.

In response to the protest, the TEP notes that “[u]se of open source technologies is a requirement in this BPA as per both the [ ] BPA [performance work statement] and the [Information Systems Group] (ISG) Working Environment provided as part of the solicitation.” TEP Statement at 11; AR, Tab 1B, RFQ, Performance Work Statement at 7 (requiring the “[p]rovision of customer-friendly open source solutions that provide ease of use for non-technical Government users” and the “[u]se of open source products over [commercial-off-the-shelf] COTS solutions whenever possible”); AR, Tab 1C, RFQ attach. 3, ISG Working Environment at 11 (“ISG promotes the use of open source technologies and methodologies to the maximum extent possible.”). The TEP explains that “while SFI provided a list of open source technologies, the overall presentation and execution of what was provided in their source code repository was a relatively basic representation of how one might use their selected set of tools.” Id. In this regard, the TEP explains that “SFI met expectations by providing a list of open source technologies that are fairly common and not exceptional.” Id. In contrast, the TEP notes that, with regard to the two vendors that received positives, “open source technologies made up the preponderance of both [vendors’] solution[s], maximizing the benefits open source technologies offer, including reduced risk of vendor/technology lock in and flexibility to use the most suitable and modern tools.” Id. at 11-12 (noting that the solution for Dynanet “used open source tools entirely” and that, for Octo, “the core architecture of Octo’s solution was based on open source programs”). The TEP states that SFI’s solution, on the other hand, “used a mix of both open source and closed source tools,” which the TEP found was “not impactful enough to warrant a positive finding for this aspect.” Id. at 12.

We find nothing unreasonable regarding the agency’s evaluation. The protester has failed to demonstrate how its quotation exceeded the solicitation requirement or that the positives received by the other two vendors did not stem from differences between the quotations. Camber Corp., supra. In sum, this argument does not provide a basis to sustain the protest.

Adjectival Ratings

SFI contends that it should have received higher confidence ratings under the design demonstration, DevSecOps case study, and technical challenge factors based on the number of positives and negatives assessed for each factor during the evaluation. Protest at 17-20, 26-29, 48-49. We find no merit to the protester’s arguments.

For example, for the DevSecOps case study factor, SFI argues that it should have received a rating of high confidence instead of a rating of moderate confidence because its quotation was assessed five positives and no negatives. Protest at 26. In addition, the protester alleges that the agency’s failure to assign its quotation a high confidence rating for this factor is inconsistent with the agency’s assessment of ratings to its quotation under the design demonstration factor (where three positives and two negatives resulted in a moderate confidence rating) and corporate capabilities factor (where three positives and no negatives resulted in a high confidence rating).[15] This aspect of SFI’s protest relies on a faulty premise--that the agency assigned confidence ratings by merely counting the number of positives and the number of negatives under a factor. As we have consistently stated, evaluation scores, whether they be numeric or adjectival, are only guides to intelligent decision making. See Aptim-Amentum Decommissioning, LLC, B-420993.3, Apr. 26, 2023, 2023 CPD ¶ 107 at 6-7. The scores assigned are not dispositive metrics for an agency to express a quotation’s merit. What is important is the underlying substantive merits of the quotation as embodied in, or reflected by, the scores, along with the underlying narrative description that supports the assignment of those scores. Id.

As noted, we find no merit to SFI’s challenges to the positives and negatives assessed to its quotation under any of the evaluation factors. Nor has SFI shown that the agency’s underlying narrative findings were inaccurate. Since the record accurately reflects the underlying substantive findings of the agency’s evaluation, SFI’s challenge to the adjectival ratings assigned to its quotation reflects nothing more than disagreement with the scoring of its quotation. Such disagreement does not provide a basis to sustain the protest. We therefore find no merit to this aspect of its protest.

Best-Value Tradeoff

Finally, the protester challenges CMS’s best-value tradeoff. The protester’s primary argument is that the tradeoff was based on a flawed design demonstration, DevSecOps case study, and technical challenge evaluation, as discussed above. Protest at 54-55. Because we find no merit to the protester’s challenges to the agency’s evaluation of quotations, we see no basis to sustain the protester’s derivative challenge to the agency’s best-value decision. See Allied Tech. Grp., Inc., B-412434, B-412434.2, Feb. 10, 2016, 2016 CPD ¶ 74 at 12.

We also find unavailing the protester’s assertion that in conducting the tradeoff analysis, the agency failed to appropriately consider price.

Subpart 8.4 of the FAR provides for a streamlined procurement process with minimal documentation requirements. FAR 8.405-3(a)(7); Sapient Gov’t Servs., Inc., B‑410636, Jan. 20, 2015, 2015 CPD ¶ 47 at 3 n.2. Where, as here, a price/technical tradeoff is made in a federal supply schedule procurement, the source selection decision must be documented, and the documentation must include the rationale for any tradeoffs made. Sigmatech, Inc., B-415028.3, B-415028.4, Sept. 11, 2018, 2018 CPD ¶ 336 at 11. The extent of such tradeoffs is governed by a test of rationality and consistency with the evaluation criteria. Id. An agency may properly select a more highly rated quotation over a lower-priced one where it has reasonably concluded that the technical superiority outweighs the difference in price. Deloitte Consulting, LLP, B‑419336.2 et al., Jan. 21, 2021, 2021 CPD ¶ 58 at 14-15.

Here, the SSA’s comparative analysis of the quotations and tradeoff decision were rational, consistent with the evaluation criteria, and well-documented. The record reflects that the SSA performed a detailed comparative analysis of the quotations, noting the important discriminators in his discussion. See AR, Tab 21A, Revised SSD. For example, the agency’s comparison between Coforma and SFI included four pages of analysis of the positives and negatives identified for each vendor. See AR, Tab 21H, SSD (Coforma) at 12-15, 24-25. The SSA performed an extensive tradeoff between SFI’s and Coforma’s quotations and found discriminators in favor of Coforma for its design demonstration, DevSecOps case study, and technical challenge, and in favor of SFI for corporate capabilities. Id. The SSA found that SFI’s rating of low confidence under the technical capabilities factor “is not counter-balanced or otherwise lessened as a result of its positive aspects in other areas of SFI’s technical quotation.” Id. at 15.

The SSA therefore determined that “Coforma has more non-price merit than SFI overall as they have more impactful and comprehensive features tha[n] SFI” and “less risk than SFI.” Id. The SSA also noted that Coforma’s total evaluated price “was approximately $40.4 million higher than [SFI’s] total evaluated price for this sample order,” but found that “Coforma had important and impactful distinguishing positive features when compared to [SFI],” and therefore found that “the price premium [for Coforma] is justified.” Id. at 24-25 (also noting that “[d]ue to the critical importance of the [ ] services, I believe this price premium for the Coforma quote over [SFI’s] quote is warranted to provide the higher quality services and reduced risk to performance that Coforma should deliver.”). The agency performed a similarly detailed analysis between SFI and all of the awardees. See AR, Tab 21B, SSD (Octo) at 13-17, 26-28; Tab 21C, SSD (Nava) at 13-16, 27-28; Tab 21D, SSD (Flexion) at 14-19, 32-34; Tab 21E, SSD (Dynanet) at 13-17, 28-29; Tab 21F, SSD (Bellese) at 12-16, 27-29; Tab 21G, SSD (Softrams) at 13-17, 27-28; Tab 21H, SSD (Coforma) at 12-15, 24-25; Tab 21I, SSD (Oddball) at 13-17, 28-31.

As noted above, the protester asserts that the agency’s tradeoff failed to include a meaningful comparative assessment of the vendors’ quotations and unreasonably determined that the higher-rated quotations of two awardees (Coforma and Nava) were worth a price premium. Based on our review of the record, we do not agree that the SSA failed to conduct a comparative assessment of the vendors’ quotations or disregarded price in the source selection. Rather, as discussed in detail above, the record reflects that the SSA considered SFI’s lower proposed price, but determined that the quotations of the eight awardees, including Coforma and Nava, represented the best value to the government. In sum, although SFI disagrees with the agency’s evaluation, the record demonstrates that, at every step in the procurement, the agency considered all of the information submitted by the vendors and available to it, issued well-reasoned and rational evaluation reports, and made a best-value tradeoff that highlighted key discriminators between the quotations.

The protest is denied.

Edda Emmanuelli Perez
General Counsel

 

[1] The solicitation provided that the agency would identify a “[m]aximum of 10 [a]wards” of capable vendors to compete and provide services under the BPA. RFQ at 1.

[2] The RFQ also requires a section 508 compliance checklist, to be evaluated only for the proposed awardees. AR, Tab 17A, RFQ amend. 008 at 30. Though not at issue in this decision, section 508 refers to the Rehabilitation Act of 1973, as amended, which generally requires that agencies’ electronic and information technology be accessible to people with disabilities. See 29 U.S.C. § 794d.

[3] The agency selected the following eight vendors for the establishment of BPAs: (1) Bellese Technologies, LLC, of Owings Mills, Maryland; (2) Coforma, LLC, of Washington, District of Columbia; (3) Dynanet Corporation, of Elkridge, Maryland; (4) Flexion Inc., of Madison, Wisconsin; (5) Nava Public Benefit Corporation, of Washington, District of Columbia; (6) Octo Metric LLC, of Atlanta, Georgia; (7) Oddball, Inc., of Washington, District of Columbia; and (8) Softrams, LLC, of Leesburg, Virginia. AR, Tab 25A, Notice of Unsuccessful Vendor at 1-2.

[4] The GAO attorneys assigned to the protests conducted an outcome prediction alternative dispute resolution conference with the parties in two of the protests, in which they advised the parties that GAO would likely sustain the protesters’ challenges to the agency’s evaluation under factor 1, demonstration of design capabilities.

[5] After we dismissed the protests, one of the unsuccessful vendors filed a protest challenging the scope of the corrective action, which our Office denied on October 10, 2023. Skyward IT Solutions, LLC, B-421561.10, Oct. 10, 2023, 2023 CPD ¶ 269.

[6] The agency rated quotations under each factor as: high confidence, moderate confidence, low confidence, or no confidence. RFQ at 21-22.

[7] In filing and pursuing this protest, SFI has made arguments that are in addition to, or variations of, those discussed below. While we do not address every issue raised, we have considered all of the protester’s arguments and conclude none furnishes a basis on which to sustain the protest.

[8] An artifact is a byproduct of software development that helps describe the architecture, design, and function of the software. For example, the TEP explains that a persona--which is a “fictional, yet realistic, description of a typical or target user of the product” that is used to “promote empathy, increase awareness and memorability of target users, prioritize features, and inform design decisions”--is an artifact commonly used within the Human-Centered Design (HCD) industry. TEP Statement at 2.

[9] The protester further explains that the “[DELETED].” Protest at 13 (quoting AR Tab 5A, SFI Video Transcript at 2).

[10] For example, SFI cites to the following description of the [DELETED] design project from SFI’s quotation:

[DELETED].

Protest at 14-15 (quoting AR, Tab 5B, SFI Quotation (Factor 1) at 4-5).

[11] Additionally, the TEP states that “the artifacts shown in Figure 3 are not personas, but rather user guides and instructions meant to assist users in navigating the system. (pg. 4, Case Study).” TEP Statement at 2. The TEP further explains that SFI’s “YouTube video (1:09 – 1:44) also provides high-level information, first focusing on a [DELETED] that, while professional-looking, does not reveal user insights” and is “comprised of [DELETED],’ but does not reveal deeper insights specific to the project.” Id. The TEP also states that “the activities mentioned throughout this section of [SFI’s] video are common within the Human-Centered Design (HCD) industry, and CMS would expect any design team to perform them.” Id. The TEP also states that SFI’s “YouTube video also shows a visual of its [DELETED], which is pixelated and unreadable.” Id. The TEP explains that, as articulated in the evaluation, the “lack of detail across this section of the Case Study and YouTube video decreased TEP’s confidence in the vendor’s HCD expertise in the area of User Research / Generative Research.” Id.

[12] GitHub is a web-based interface that uses open-source version control software (Git) to allow multiple people to make changes to web pages at the same time; a GitHub repository is a platform for storing code and files. An Introduction to GitHub, Digital.gov, available at https://digital.gov/resources/an-introduction-github/ (last visited Apr. 5, 2024).

[13] Specifically, under the DevSecOps case study factor, the protester asserts that it should have received two additional positives for its use of open source technologies and evidence of app hardening, which reduces the size of the attack surface in the environment. Protest at 30. Under the technical challenge factor, the protester contends that it should have received two additional positives for using automated load and performance testing tools and presenting a detailed decision table. Id. at 50.

[14] To the extent the protester raises these assertions based on a comparison to unsuccessful vendors, SFI cannot demonstrate any competitive prejudice with respect to positives that were assigned to unsuccessful vendors. See, e.g., Environmental Chem. Corp., B-416166.3 et al., June 12, 2019, 2019 CPD ¶ 217 at 6 n.5 (“Even assuming for the sake of argument that the evaluation was disparate, . . . we find no basis to conclude that [the protester] was competitively prejudiced where the alleged disparate evaluation was with respect to other unsuccessful offerors.”).

[15] For the design demonstration factor, SFI’s quotation was assessed three positives and two negatives and received a confidence rating of moderate. Comparing the evaluation of its quotation to that of one of the unsuccessful vendors, SFI asserts that it should have received more credit for the positives assessed to its quotation. Protest at 17. For the technical challenge factor, SFI contends that it should have received a rating of moderate confidence instead of a rating of low confidence based on its assertion that the three negatives assigned to its quotation did not relate to a “core” requirement. Id. at 44.
