Trustworthy ML Initiative (TrustML)

www.trustworthyml.org

As machine learning (ML) systems are increasingly deployed in real-world applications, it is critical to ensure that they behave responsibly and are trustworthy. To this end, there has been growing interest among researchers and practitioners in developing and deploying ML models and algorithms that are not only accurate, but also explainable, fair, privacy-preserving, causal, and robust. This broad area of research is commonly referred to as trustworthy ML. While it is exciting that researchers from diverse domains, ranging from machine learning to health policy and law, are working on trustworthy ML, this breadth has also given rise to critical challenges such as information overload and a lack of visibility for the work of early-career researchers. Furthermore, the barriers to entry into the field are growing day by day: newcomers face an overwhelming amount of prior work without a clear roadmap of where to start or how to navigate the field. To address these challenges, we are launching the Trustworthy ML Initiative (TrustML) with the following goals:

1. Enable easy access to fundamental resources for newcomers to the field.
2. Provide a platform for early-career researchers to showcase and disseminate their work.
3. Encourage discussion and debate on the latest work in trustworthy ML.
4. Develop a community of researchers and practitioners working on topics related to trustworthy ML.



Country: United States
State: Massachusetts
City (Headquarters): Boston
Employees: 1-10
Founded: 2020


Potential Decision Makers

  • Co-Founder
  • Co-Founder
  • Co-Founder
