Such systems have three principal functions:
- Inform participants about other participants, to help them determine if a particular participant is trustworthy.
- Create an incentive for good behavior. If participants know that they will be rated and that the rating is publicly available, they are more likely to provide accurate information (e.g., product listings), good service, and so on.
- Provide a selection effect. If participants know that good behavior will be noticed and rewarded, they are more likely to join the system. Similarly, would-be malicious participants will know that any incompetence or deliberate disruption will be made public — a deterrent to misbehavior.
Such systems typically follow one of the two main structural models below (compromises between these two extremes are also possible):
- Centralized model: a central authority collects reputation scores from other entities (and from other sources, such as its own observations), typically processes them into an aggregated reputation score for a given entity, and then redistributes that score for use by other entities. Online trading and market communities use this model.
- Decentralized model: the entities participating in the community share the reputation information, without the need for a central repository. This model is more suitable for networks that are decentralized by nature, such as peer-to-peer and autonomic systems. It also allows peers to assign different trust values to different sources of reputation scores.
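The two models above can be contrasted in code. The following is a minimal sketch, not a production design: the class names (`CentralizedReputation`, `Peer`), the plain-average aggregation, and the default trust weight of 0.5 for unknown sources are all illustrative assumptions, not part of any standard.

```python
from collections import defaultdict


class CentralizedReputation:
    """Centralized model: one authority collects all ratings,
    aggregates them (here, a simple average), and serves the result."""

    def __init__(self):
        self.ratings = defaultdict(list)  # entity -> list of scores

    def submit(self, entity, score):
        self.ratings[entity].append(score)

    def reputation(self, entity):
        scores = self.ratings[entity]
        return sum(scores) / len(scores) if scores else None


class Peer:
    """Decentralized model: each peer keeps its own reports and weights
    each report by how much it trusts the reporting source."""

    def __init__(self):
        self.source_trust = {}            # source peer -> weight in [0, 1]
        self.reports = defaultdict(list)  # entity -> [(source, score), ...]

    def receive_report(self, source, entity, score):
        self.reports[entity].append((source, score))

    def reputation(self, entity):
        # Unknown sources get a neutral weight of 0.5 (an arbitrary choice).
        weighted = [(self.source_trust.get(src, 0.5), score)
                    for src, score in self.reports[entity]]
        total_weight = sum(w for w, _ in weighted)
        if total_weight == 0:
            return None
        return sum(w * s for w, s in weighted) / total_weight
```

Note how the decentralized `Peer` can discount a distrusted source entirely (weight 0), which is exactly the per-source flexibility the centralized model lacks: there, every participant receives the same aggregated score.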
There are also several types of reputation-based systems, including:
> "[R]eputation systems rely on voluntary investment of time and energy to provide ratings and may therefore be gamed or simply skewed toward participants with strong views and available time to participate, providing potentially inaccurate or at least unrepresentative data."