Contributions
- 1: Our priorities
- 1.1: Priority 1: Launch ready4 Minimum Viable Product (MVP) systems model
- 1.2: Priority 2: Maintain ready4
- 1.3: Priority 3: Apply ready4 to undertake replications and transfers
- 1.4: Priority 4: Grow a ready4 community
- 1.5: Priority 5: Extend the scope of the ready4 model
- 1.6: Priority 6: Integrate ready4 with other open source tools
- 2: Contribution types
- 2.1: Provide advice
- 2.2: Contribute code
- 2.3: Fund projects
- 2.4: Undertake projects
- 2.5: Support the ready4 community
- 3: Contributor covenant (code of conduct)
1 - Our priorities
1.1 - Priority 1: Launch ready4 Minimum Viable Product (MVP) systems model
Why?
All our software, regardless of status, is supplied without any warranty. However, our views about whether an item of software is potentially appropriate for others to use in undertaking real world analyses can be inferred from its release status. If it is not a production release, we probably believe that it needs more development and testing and better documentation before it can be used for any purpose other than the specific studies in which we have already applied it. Partly for this reason, it is unlikely that any item of our software will be widely adopted until it is available as a production release. We also cannot meaningfully track uptake of our software until it becomes available in a dedicated production release repository. We need a critical mass of modules available as production releases so that they can be combined to model moderately complex systems.
What?
By bringing all our current development version and pipeline libraries to production release, we aim to launch the ready4 Minimum Viable Product (MVP) systems model. The MVP model will comprise our modelling framework plus an initial skeleton of production ready modules for modelling people, places, platforms and programs.
The most important types of help we need with achieving this goal are funding, code contributions, community support and advice.
How?
The main tasks to be completed to bring all of our existing code libraries to production releases are as follows:
- (For unreleased software) Address all issues preventing public release of code repositories (e.g. fixing errors preventing core functions working, removing all traces of potentially confidential artefacts from all versions/branches of a repository, etc.).
- (For code libraries that are implemented using only the functional programming paradigm) Author and test new modules.
- Write / update unit tests (tests of individual functions / modules for multiple potential uses / inputs that will be automatically run every time a new version of a library is pushed to the main branch of its development release repository); a minimal example of such a test is sketched after this list.
- Enhance the documentation that is automatically authored by algorithms from our model authoring tools. This will involve some or all of:
  - minor modifications of function names / arguments / code;
  - updating the taxonomy datasets used in the documentation writing algorithm; and/or
  - updating the documentation authoring algorithms (within the ready4fun and ready4class packages).
- Add human authored documentation for the modules contained in each library.
- (For some libraries) Add a user-interface.
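By way of illustration, a unit test of the kind referred to above might look like the following minimal sketch. The summarise_scores() helper is hypothetical (it is not part of any ready4 library); the point is simply that each function / module gets tested against typical, edge-case and invalid inputs.
# Minimal sketch of a testthat unit test (hypothetical function under test).
library(testthat)
summarise_scores <- function(x) {  # hypothetical helper, defined here so the sketch runs
  if (!is.numeric(x)) stop("x must be numeric")
  if (length(x) == 0) return(NA_real_)
  median(x)
}
test_that("summarise_scores() handles typical, empty and invalid inputs", {
  expect_equal(summarise_scores(c(1, 2, 3)), 2)     # typical input
  expect_true(is.na(summarise_scores(numeric(0))))  # edge case: empty vector
  expect_error(summarise_scores("not a number"))    # invalid input
})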
When?
Our production releases will be submitted to the Comprehensive R Archive Network (CRAN). CRAN does not allow submitted R packages to depend on development version R packages, so the dependency network of our code libraries shapes the sequence in which we bring them to production releases.
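For example, where the relevant packages are already on CRAN, their dependency ordering can be inspected with base R tooling. The snippet below is a sketch only; the subset of library names listed is illustrative and, at the time of writing, not all of them may be available from CRAN.
# Sketch: inspect the (CRAN-visible) dependency network of selected libraries.
db <- available.packages(repos = "https://cran.r-project.org")
pkgs <- c("ready4", "ready4show", "ready4use")  # illustrative subset
tools::package_dependencies(pkgs, db = db,
                            which = c("Depends", "Imports"),
                            recursive = FALSE)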
The planned sequence for bringing current development release code libraries to production releases is:
- Three of the six framework libraries - the ready4 foundation, followed by the ready4show and ready4use authoring tools.
- The seven module libraries that are sufficiently developed to have been used in real world scientific studies, in the following order: youthvars, scorz, specific, TTU, youthu, mychoice and heterodox.
- The very early stage bimp library from the modelling programs pipeline.
- The three remaining computational model authoring tools framework libraries, starting with ready4fun, then ready4class and finally ready4pack.
The planned sequence for bringing unreleased code first to development releases and then to production releases is:
- The four development module libraries from the modelling places pipeline (the library for synthesising geometry and spatial attribute data, followed by the spatial attribute simulator, the prevalence predictor and user interface libraries).
- The two module libraries from the modelling people pipeline (the synthetic household creation library and the agent-based modelling library).
- The three module libraries from the modelling platforms pipeline (the primary mental health service discrete event simulation, followed by the early psychosis cohort model, followed by the service system eligibility and referral policy optimisation model).
How quickly we can launch production releases of all our code depends on how much and what type of help we get. Working within our current resources, we expect the first of the 23 libraries listed above to be released early in 2023 and the last during late 2025. With your help, this release schedule can be sped up.
1.2 - Priority 2: Maintain ready4
Why?
A significant limitation of many health economic models is that they are not updated and can become progressively less valid with time. The importance of maintaining a computational model increases if, like ready4, it is intended to have multiple applications and users. As we progressively make production releases to launch the ready4 MVP model, we intend that people will start using it. As ready4 becomes more widely used, its limitations (errors, bugs, restrictive functionality and confusing / inadequate documentation) are more likely to become exposed and to require remediation. Addressing such issues needs to be implemented skillfully and considerately to avoid unintended consequences for existing model users (e.g. to ensure software edits to fix one problem do not prevent previously written replication code or downstream dependencies from executing correctly). Open source projects like ready4 also need to make changes in response to decisions by third parties - such as edits to upstream dependencies and changes in the policies of hosting repositories - and to update citation / acknowledgement information to appropriately reflect new contributors.
What?
All ready4 software needs to be maintained and updated to identify and fix bugs, enhance functionality and usability, respond to changes in upstream dependencies and to conscientiously deprecate outdated code. Open access datasets made available for use in modelling analyses need to be actively curated to ensure they remain relevant to current decision contexts. Decision aids need to be reviewed and updated to ensure they continue to use the most up to date and appropriate modules and input data.
The most important types of help we need with this priority area are funding, code contributions, community support and advice.
How?
The main tasks for the maintenance of framework and model software are to:
- Appropriately configure and update the settings of the ready4 GitHub organisation and its constituent repositories to facilitate easy to follow and efficient maintenance workflows.
- Proactively:
  - author ongoing improvements to software testing, documentation and functionality;
  - make archived releases of key development milestones in the ready4 Zenodo community; and
  - submit new production releases to the Comprehensive R Archive Network (CRAN) - see the sketch after this list.
- Reactively elicit, review and address feedback and contributions from the ready4 community (e.g. bugs, issues and feature requests).
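A minimal sketch of tooling that can support the proactive tasks above is shown below. It assumes the usethis and devtools packages; the choice of the "check-standard" workflow is illustrative rather than a statement of ready4's actual continuous integration configuration.
# Sketch: set up automated checks and run a pre-submission check for one library.
library(usethis)
library(devtools)
use_github_action("check-standard")  # adds a GitHub Actions R CMD check workflow
check()                              # run R CMD check locally before a CRAN submission
# devtools::submit_cran() can then be used when the package is ready for release.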
The main tasks for curating model data collections include:
- Implementing ongoing improvements and updates to meta-data descriptors of data collections and individual files.
- Facilitating the linking of datasets to and from the ready4 Dataverse.
- Reviewing all collections within the ready4 Dataverse to identify datasets or files that are potentially out of date.
- Creating and publishing new versions of affected datasets with the necessary additions, deletions and edits and updated metadata. Prior versions of data collections remain publicly available.
- Informing the ready4 community of the updated collections.
The main tasks for curating decision aids include:
- Monitoring the repositories of the software and the data used by the app for important updates.
- Deploying an updated app bundle of software and data to a test environment on Shinyapps.io (see the sketch after this list).
- Testing the new deployment and eliciting user feedback.
- Implementing any required fixes identified during testing.
- Deploying the updated app to a Shinyapps.io production environment.
- Informing the ready4 community of the updated decision aid.
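The deployment steps above can be scripted with the rsconnect package. The sketch below uses hypothetical directory, app and account names; deploying under a separate test name first keeps the production app untouched until testing is complete.
# Sketch: deploy an updated app bundle to a test environment on Shinyapps.io.
library(rsconnect)
deployApp(appDir = "app_bundle",          # hypothetical folder containing app.R, data and modules
          appName = "decision-aid-test",  # deploy under a test name first
          account = "ready4-example")     # hypothetical Shinyapps.io account
# Once testing is complete, redeploy with the production appName.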
When?
Maintenance is an ongoing and current responsibility. Maintenance obligations are expected to grow considerably as we launch more production releases, extend the ready4 model and grow the ready4 community.
1.3 - Priority 3: Apply ready4 to undertake replications and transfers
Why?
In this relatively early stage of ready4’s development, the authoring of new ready4 modules can involve a significant investment of time and skills, an investment that is typically made in the context of implementing a modelling project for a scientific study. However, once authored, these modules may significantly streamline the implementation of modelling analyses that replicate or transfer the studies for which they were developed. For modellers and other researchers, using ready4 for this purpose may provide the highest reward to effort ratio of any contribution to the ready4 community. Network effects also kick in - more replications and generalisations mean more open access data and module customisations available to other users, enhancing the practical utility of ready4.
What?
We plan to demonstrate that studies implemented with ready4 are relatively straightforward and efficient to replicate and transfer. The most important initial types of help we need with achieving this goal are funding, projects, code contributions and advice.
How?
The main tasks for implementing study replications and transfers are:
- Identify the example study to be replicated or transferred.
- Review that study’s analysis program:
  - do the data used in this program have similar structure / concepts / sampling to the data for which a new analysis is planned?
  - are modules used in that program from production release module libraries and do any of them require authoring of inheriting modules to selectively update aspects of module data-structures or algorithms?
- Create a new input dataset, labelling and (for non-confidential data) storing the data in an online repository (which can be kept private for now).
- (If new inheriting modules are required) Make a code contribution to create and test new inheriting modules.
- Adapt the original study’s analysis program to account for differences in input data, model modules and study reporting.
- Share the new analysis program in the ready4 Zenodo community.
- Ensure the online model input dataset is made public and submit it as a Linked Dataverse Dataset in the appropriate section of the ready4 Dataverse.
When?
In most cases, we recommend waiting until production releases of relevant module libraries are available. However, we are currently planning or actively undertaking some initial study analysis transfers using the development versions of our utility mapping and choice modelling module libraries. We are undertaking this work in parallel with testing and, where necessary, extending the required modules. We suggest that, should you believe that any of our development version software is potentially relevant to a study you wish to undertake, you first get in touch with our project lead to discuss the pros / cons and timing of using this software.
1.4 - Priority 4: Grow a ready4 community
Why?
ready4 is open source because we believe that transparent and collaborative approaches to model development are more likely to produce accountable, reusable and updatable models. No one modelling team has the resources or breadth of expertise and diversity of values to adequately address all of the important decision topics in youth mental health systems design and policy. Opportunities for modellers to test, re-use, update and combine each other’s work help make modelling projects more valid and tractable. Models have become increasingly complex, so simply publishing model code and data may have limited impact on improving model transparency. These artefacts also need to be understood and tested. Clear documentation and frequent re-use in different contexts by multiple types of stakeholder make it more likely that errors and limitations can be exposed and remedied. Decentralising ownership of a model to an active community can help sustain the maintenance and extension of a model over the long term and mitigate risks and bottlenecks associated with dependency on a small number of team members.
What?
Our aim is to enhance the resilience, quality, legitimacy and impact of ready4 by developing a community of users and contributors. The most important initial types of help we need with achieving this goal are funding, community support and advice.
How?
The process of developing the ready4 community involves the following tasks:
- Creating and recruiting to volunteer advisory structures to elicit guidance on strategic, technical and conceptual topics.
- Enhancing the ease of use for third parties of existing framework authoring tools.
- Developing improved documentation and collateral (e.g. video tutorials) for ready4 software and data.
- Configuring hosting repositories to implement clear collaborative development workflows.
- Promoting ready4 to potential users and stakeholders.
- Continually expanding, diversifying and updating the authorship and maintenance responsibilities of all ready4 software.
When?
We plan to begin seeking input into nascent advisory structures during 2023. The speed at which we undertake other activities to grow the ready4 community depends on our success at securing funding to provide required support infrastructure.
1.5 - Priority 5: Extend the scope of the ready4 model
Why?
We hope that once launched, the ready4 MVP systems model will be an accountable, reusable and updatable model that can demonstrate its usefulness for addressing some important topics in youth mental health. However, there will inevitably be a much greater number of topics that the MVP model lacks the scope to adequately address. The two main scope limitations of the MVP model are expected to be omissions and level of abstraction. Some relevant system features will be omitted from representation by the MVP model - for example, our pipeline of platforms modules does not currently include any planned modules for modelling the operations of digital mental health services or schools. System features that are represented in the MVP model may only have one level of abstraction, which may be either too simple or too complex to be appropriately applied to some modelling goals.
What?
We plan to progressively extend the scope of ready4 and the range of decision topics to which it can validly be applied. The most important initial types of help we need to achieve this goal are funding, projects and advice.
How?
The two main strategies for extending ready4 are to translate existing models and develop new models. The process for developing new models is outlined elsewhere as the steps required to undertake a modelling project.
Translating existing models involves the following steps:
- Identify existing computational model(s) of relevant youth mental health systems to be redeveloped using the ready4 framework. Processes for identifying models could include:
- A modelling team reviewing some of the models that they have previously implemented using other software; and/or
- A systematic search of published literature and/or model repositories.
- (Optional - only if a single project plans to redevelop multiple models) Develop a data extraction tool into which data on relevant model features will be collated and categorised.
- Extract data on relevant model features. In the (highly likely) event that the reporting and documentation of the model being redeveloped lacks important details:
  - Contact the original model authors for assistance; and/or
  - Seek relevant advice to help determine plausible / appropriate values for missing data.
- Author module libraries for representing the included model(s).
- Author labelled open access datasets of model input data (which can be set to private for now).
- Author analysis and reporting programs designed to replicate the original modelling study / studies.
- Compare results from original and replication analyses. Ascertain the most plausible explanations for any divergence between results. Where this explanation relates to an error or limitation in the new ready4 modules or analysis programs that have been authored, fix these issues.
- Complete documentation of model libraries, datasets and analyses.
- (If not already done) Publish / link to datasets on the ready4 Dataverse and share releases of libraries and programs in the ready4 Zenodo community.
When?
As our current focus is on developing the MVP model, we are not yet actively pursuing this priority. That will change if we are successful in securing more support from funders. In the meantime, if you are a researcher and/or modeller who is interested in leading a project that can help extend ready4, you can contact our project lead for guidance and/or to discuss the potential for collaborations.
1.6 - Priority 6: Integrate ready4 with other open source tools
Why?
Currently, all ready4 software is developed using the R language. Although R is powerful, popular and flexible, there are limitations to relying on this toolkit alone. For some tasks, tools written in other languages provide superior performance. Requiring coders to have knowledge of R also erects barriers to participation that limit the rate and quality of ready4’s development.
What?
We aim to support and integrate the development and use of tools to implement and extend the ready4 model in multiple languages, with an initial focus on Python. The most important initial types of help we need with achieving this goal are advice, funding and code contributions.
How?
This is a longer term program of activity that has yet to be planned. We expect the first step in this process will be convening an advisory group of interested stakeholders to help us identify appropriate actions.
When?
We have no active plans to progress this during our current 2023-2025 activity cycle. However, we are open to providing whatever support and guidance we can to researchers and organisations who are interested in leading a project of this nature.
2 - Contribution types
2.1 - Provide advice
What?
We need advice:
- to help review and update our priority goals and develop, refine and implement strategies for achieving these goals;
- to help plan and execute modelling projects that produce accountable, reusable and updatable models; and
- to identify how our existing software and data can be usefully improved.
Who?
We want advice from our users (coders, modellers and planners), stakeholders (funders, researchers and young people) and other supporters (those with relevant expertise in technical communication, building open source communities, product development, etc.).
How?
Advice can be provided by:
- Joining a volunteer advisory board to help shape the evolution of ready4. We plan on inviting expressions of interest (EOIs) for this type of role later in 2023. If you want to ensure that you are sent details of the EOI invitations, contact the ready4 project lead.
- Participating in the advisory structures and events of individual modelling projects. The nature of these opportunities will vary by project and the team responsible for implementing each project. For those projects we lead ourselves, we typically promote such EOIs via the Orygen website and associated social media channels.
- Flagging software features, usability and documentation issues. If you have the capacity and willingness to also fix the issues, you can approach this using the process for making a code contribution. Otherwise, you can do so by creating an issue on that software project’s repository in our GitHub organisation. For example, to create a new issue relating to the ready4 foundation library, use https://github.com/ready4-dev/ready4/issues/new (you will need a GitHub account).
2.2 - Contribute code
What?
Test, improve or extend our software. This is essential to achieving our priority goals.
Who?
To make a code contribution, you need to be a coder familiar with R, R Markdown and git. You will also need a GitHub account. For many types of contribution, you will also need to use our framework’s module authoring tools. We have yet to adequately document and refine these tools to make them easier for third parties to use (we plan to do this), so if you are interested in making anything other than a relatively minor code edit, we recommend that you first contact our project lead to discuss your idea.
As a contributor to ready4, you will also be expected to adhere to the ready4 contributor covenant (code of conduct).
How?
The process for making a code contribution broadly conforms to the steps we itemise below, which we have minimally adapted from this template. If you need further help to make a contribution, you can contact the ready4 project lead directly.
- Find an issue that you are interested in addressing or a feature that you would like to add. Ideally, consider how your planned contribution matches our current priorities.
- Fork the repository associated with the issue from our GitHub organization to your local GitHub organization. This means that you will have a copy of the repository under your-GitHub-username/repository-name.
- Clone the repository to your local machine using:
git clone https://github.com/github-username/repository-name.git
- Create a new branch for your fix using:
git checkout -b branch-name-here
- Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
- To add the file contents of the changed files to the “snapshot” git uses to manage the state of the project, also known as the index, use:
git add insert-paths-of-changed-files-here
- To store the contents of the index with a descriptive message, use:
git commit -m "Insert a short message of the changes made here"
- Push the changes to the remote repository using:
git push origin branch-name-here
- Submit a pull request to the upstream repository.
- Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so: “Added more log outputting to resolve #4352”.
- In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It’s OK if your pull request is not perfect (no pull request is); the reviewer will be able to help you fix any problems and improve it!
- Wait for the pull request to be reviewed by a maintainer.
- Make changes to the pull request if the reviewing maintainer recommends them.
- Celebrate your success after your pull request is merged!
2.3 - Fund projects
What?
Provide cash or in-kind resources to support us to achieve any or all of our priority goals.
Who?
We are seeking support from multiple different types of funder. At this early stage of our development, we expect that the most impactful way of supporting ready4’s development is to award funding for that purpose directly to ready4’s two institutional sponsors: Orygen and Monash University. Another way to support ready4 is to fund ready4 modelling projects led by other research institutions, which may or may not be formally affiliated with us.
How?
The two main categories of funding we seek are:
- Core infrastructure. Essential to the success of priorities 1-6 above is adequately resourced support infrastructure. Financial support we receive for this purpose will primarily be dedicated to recruiting a skilled team of data scientists (coders), modellers, technical documentation / training developers, community builders and stakeholder managers. Other important resource requirements relate to licensing appropriate technical solutions (hosting, security, workflow optimisation, etc.) to support the ready4 community.
- Modelling projects. To advance priorities 3 and 5 above, teams with high quality plans to undertake modelling projects with ready4 need to be backed with financing. Typically, funding provided to these types of projects will primarily be spent on employing modellers, data scientists and other researchers and on supporting processes to meaningfully engage young people, planners and other stakeholders.
If you would like to invite a funding proposal from ready4, contact the project lead. You can also simply make a direct donation to Orygen (please remember to specify www.ready4-dev.com as the reference for the project you would like to support!).
2.4 - Undertake projects
What?
A ready4 modelling project undertakes novel analysis of youth mental health topics by using, enhancing and/or authoring model modules, datasets and executables. Each ready4 modelling project has its own unique funder(s), governance, objectives and team. The links between modelling projects are in the form of a common framework and membership of a collaborative community.
Undertaking modelling projects will help us achieve our following priority goals:
3. Applying ready4.
5. Extending ready4.
Who?
Modelling projects should typically be led by a researcher (who may or may not be a modeller) or planner. The core project team will always include modelling expertise and, should authorship of new modules (or extensions to existing modules) be required, will also need to include coders. Advisory structures to engage young people and planners are also recommended.
How?
There are three main steps in implementing a ready4 modelling project.
Step 1: Develop model
Each project’s computational model is constructed by adopting one or more of the following strategies:
- selecting a subset of existing ready4 modules and using them in unmodified form;
- selecting a subset of existing ready4 modules and contributing code edits to these modules to add desired functionality;
- selecting a subset of existing ready4 modules and using them as templates from which to author new inheriting modules (which can be code contributions to an existing module library or distributed as part of a new library); and/or
- authoring new ready4 modules (most likely to be distributed in new code libraries).
As part of the validation and verification process for all new and derived modules, tests should be defined, bundled as part of the relevant module libraries and rerun every time these libraries are edited.
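As a loose illustration of the third strategy and of bundling tests, the sketch below uses plain S4 classes with hypothetical names; actual ready4 modules are authored and documented with the framework's own tools rather than written this way by hand.
# Sketch: an inheriting class that selectively overrides one algorithm, plus a bundled test.
library(methods)
library(testthat)
setClass("CostModule", slots = c(unit_cost = "numeric"))      # hypothetical module
setGeneric("project_cost", function(x, n_clients) standardGeneric("project_cost"))
setMethod("project_cost", "CostModule",
          function(x, n_clients) x@unit_cost * n_clients)
setClass("CostModuleWithOverhead", contains = "CostModule")   # hypothetical inheriting module
setMethod("project_cost", "CostModuleWithOverhead",
          function(x, n_clients) callNextMethod() * 1.1)      # adds a 10% overhead
test_that("overhead variant inflates projected costs by 10%", {   # rerun whenever the library is edited
  base <- new("CostModule", unit_cost = 100)
  variant <- new("CostModuleWithOverhead", unit_cost = 100)
  expect_equal(project_cost(variant, 10), project_cost(base, 10) * 1.1)
})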
Step 2: Add data
By data we typically mean digitally stored information, principally relating to model parameter values, that can be added to the ready4 computational model to tailor it to a specific decision context (e.g. a particular population / jurisdiction / service / intervention) and set of underpinning beliefs (e.g. preferred evidence sources). Data for a ready4 modelling project will be from one or both of the following options:
- finding and using existing open access data from other ready4 projects;
- supplying new project specific data, appropriately describing these data and (for non-confidential records) sharing these data publicly.
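For the first of these options, one way to retrieve an open access file from a Dataverse collection is with the dataverse R package. The sketch below uses a hypothetical dataset DOI, file name and server; it is not a reference to a real ready4 data collection.
# Sketch: read a tabular file from a Dataverse dataset into R.
library(dataverse)
inputs_tb <- get_dataframe_by_name(
  filename = "model_inputs.tab",       # hypothetical file name
  dataset = "doi:10.7910/DVN/XXXXXX",  # hypothetical dataset DOI
  server = "dataverse.harvard.edu"     # illustrative server
)
head(inputs_tb)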
Step 3: Run analyses
ready4 project analyses apply algorithms contained in ready4 modules to supplied data to generate insight and can be implemented by:
- adapting existing replication programs;
- authoring new analysis programs; and / or
- developing a user-interface to allow non-technical users to run custom analyses.
When reporting analyses, using a reporting template can be useful.
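For example, a new analysis program might end by rendering a parameterised R Markdown reporting template. In the sketch below, the template file and its parameters are hypothetical placeholders rather than part of any ready4 library.
# Sketch: render a parameterised R Markdown report at the end of an analysis program.
library(rmarkdown)
render(input = "report_template.Rmd",                       # hypothetical reporting template
       params = list(study_name = "Example transfer study",
                     results_path = "output/results.RDS"))  # hypothetical analysis outputs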
2.5 - Support the ready4 community
What?
Help other members of the ready4 community to apply ready4 by authoring documentation, developing training and posting answers in online help. This support is essential for us to advance the following project goals:
4. Growing a ready4 community.
5. Extending ready4.
Who?
Any community member (user or other stakeholder) can help us to improve the accessibility, clarity and usefulness of our documentation. Coders and modellers are particularly welcome to contribute support that leverages their technical expertise.
How?
The types of support contribution that we welcome include:
- Improving the documentation contained on this website. To do this, you will need a GitHub account. Once you have that, you can:
  - flag a general issue and suggest improvements by clicking on the “Create documentation issue” link or visiting https://github.com/ready4-dev/ready4web/labels/documentation ; and/or
  - suggest edits to a specific page by clicking on the “Edit this page” link.
- Improving the documentation for a specific library, executable or dataset:
  - for software documentation edits, you can use the same workflow as that for making a code contribution; and
  - for improvements to dataset documentation, we have yet to set up a streamlined workflow for this process, so for the moment please contact the ready4 project lead directly if you are interested in making this type of contribution.
- Contributing to developing other training and support resources (e.g. answering questions in online help, video tutorials, etc.). We believe that this type of content is most likely to become relevant when we have made more progress in developing the ready4 community. But again, if you are interested in this area, please contact the project lead to discuss.
3 - Contributor covenant (code of conduct)
Our pledge
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Our standards
Examples of behavior that contributes to a positive environment for our community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others’ private information, such as a physical or email address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
Enforcement responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement. All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
Enforcement guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
1. Correction
Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
2. Warning
Community Impact: A violation through a single incident or series of actions.
Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
3. Temporary ban
Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
4. Permanent ban
Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.
Community Impact Guidelines were inspired by Mozilla’s code of conduct enforcement ladder.
For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.