
What do we want to do in 2025?

Open TheFoxAtWork opened this issue 1 year ago • 6 comments

  • Information exchange is still worth doing here.
  • Building on the Model Signing project’s work.
  • From Jay: iron out what the developing security efforts look like.
    • Smaller language models and “open models” being used.
    • Thinking about supply chain security efforts: how developers pull in datasets, develop and use smaller language models, and develop open model systems. What are those security efforts from an OpenSSF perspective?
    • How can this be used by other organizations? What open source elements can those organizations take advantage of?
    • Things have improved; we’re in a position where we can do this.
    • Best practices, pipeline security, supply chain transparency.
    • Vulnerabilities in ML: handled here or jointly with the Vulnerability Disclosures WG.
    • What is the overlap of AI/ML security with the other OpenSSF WGs, and what should we be engaging with them on?

Please add your ideas here so the group can create distinct issues for items we choose to pursue.

The “definition of done” is when individual issues have been created and prioritized.

Please have any additional items added here by January 19th, 2025.

TheFoxAtWork avatar Jan 06 '25 18:01 TheFoxAtWork

On the supply chain security/transparency track, I'm thinking about how we can adapt SLSA for ML. We can build on top of model signing, sign datasets, and create SLSA-aware ML training pipelines that practitioners can use with minimal changes to their workflows.
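As a sketch of the dataset-signing idea, the first step is producing a stable digest over a dataset's files. The manifest format below is hypothetical (not a published schema): it hashes every file under a directory and serializes the result canonically, yielding a single artifact that a signer such as Sigstore's model-signing tooling could then sign.

```python
import hashlib
import json
from pathlib import Path


def dataset_manifest(root: str) -> str:
    """Build a canonical JSON manifest mapping each file under `root`
    (relative POSIX path) to its SHA-256 digest.

    Paths are sorted so the manifest is byte-for-byte reproducible,
    which is what makes it a stable signing target.
    """
    entries = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries[path.relative_to(root).as_posix()] = digest
    # Canonical serialization: this string is what would be signed.
    return json.dumps(entries, sort_keys=True, separators=(",", ":"))
```

The actual model-signing project uses its own serialization and signing flow; this only illustrates the hashing/canonicalization step that any such pipeline needs.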

mihaimaruseac avatar Jan 07 '25 19:01 mihaimaruseac

Andrey Shorov, Elif Soykan, and I want to produce the following (with the rest of the WG's help, if you'd all like):

  • Q1 white paper: MLSecOps. https://github.com/ossf/ai-ml-security/issues/16 (MLSecOps - Google Drive). Using the Ericsson reference architecture, show how the OWASP ML Top 10 risks are prevented, where current tools are generated/leveraged (e.g. OpenSSF tools like SLSA, model card signing, generating an AIBOM), and where there are gaps.
  • Q2 white paper: LLMSecOps. Using a combination of the OPEA reference architecture and https://air-governance-framework.finos.org/, build out how the OWASP LLM Top 10 risks are prevented, where current tools are generated/leveraged (e.g. OpenSSF tools like SLSA, model card signing, generating an AIBOM), and where there are gaps.

sevansdell avatar Jan 16 '25 00:01 sevansdell

Something that came up in the Model Signing SIG that is out of scope for the SIG, but could be an output of the AI/ML WG: provide input on model card metadata. Potential sync with the OASIS Data Provenance standard, the LF Model Openness Framework, and the CoSAI Supply Chain work group RFC.
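To make the metadata idea concrete, here is a strawman of the provenance fields a model card might carry. Every field name below is illustrative, not drawn from any of the standards mentioned; it only shows the kind of supply chain linkage (training data digests, base model, signature bundle) the WG input could cover.

```python
import json

# Hypothetical model card provenance metadata. Field names are
# illustrative placeholders, not a published schema.
model_card = {
    "name": "example-model",
    "version": "1.0.0",
    "license": "Apache-2.0",
    # Link each training dataset to a content digest so the card
    # can be cross-checked against signed dataset manifests.
    "training_data": [
        {"name": "example-dataset", "digest": "sha256:..."},
    ],
    "base_model": None,  # set when the model is a fine-tune
    "signature_bundle": "model.sig.bundle",
}

print(json.dumps(model_card, indent=2))
```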

sevansdell avatar Feb 06 '25 00:02 sevansdell

Do we have, or should we have, any advice for developers leveraging deepseek.ai on applying OpenSSF tools/concepts for security?

sevansdell avatar Feb 06 '25 01:02 sevansdell

I like both of these ideas!

mihaimaruseac avatar Feb 06 '25 16:02 mihaimaruseac

What does a security model for an agentic architecture look like? Much of the tool chain is OSS: LangChain, LangGraph/knowledge graphs, APIs. Where are there architecture choices that developers and enterprises may need to understand because they introduce more risk/opportunity for compromise?

sevansdell avatar Feb 06 '25 19:02 sevansdell

Closing this and starting a new one for 2026.

mihaimaruseac avatar Dec 15 '25 18:12 mihaimaruseac