Design Competition Voting System Theory

This page describes the general theory behind the voting system for A' Design Award and Competitions.

Design Competition Voting System - How to derive real criteria weights for a design competition from preference ordering votes

Abstract: To obtain intertemporally comparable results for design competitions, we should focus on criteria-based voting with a weight assigned to each criterion. Until now, however, predetermined criteria weights have been used, and these predetermined weights are not based on solid foundations; they were selected with simple reasoning. By reverse-engineering preference orderings, we can instead derive the real criteria weights that jurors apply, reflecting in action, during the voting process of a design competition. To do so, we run a true design competition where the jury is asked to vote twice: first by preference ordering of the designs, and second by criteria-based voting. We aim to gather the following information: 1. What are the real criteria weights that jury members use when they rank designs in a preference order? 2. What other possible criteria should be considered when voting for designs? 3. How can we use the real criteria weights to improve the voting processes of design competitions? Finally, we would like to run a survey to collect further information about the fundamental rules that govern the voting mechanism in a design competition. This article explains how we could design an established, fair and well-founded voting system for a design competition.

Targets
1. There are no decision-making or behavioral biases.
2. Entries are ranked according to an established mechanism.
3. The ranking is intertemporally comparable and consistent.
4. The system can give solid feedback to the participants.
5. The ranking is jury-independent.
6. Coalition vulnerability is low.

Before we can develop such a system, we should see what actually happens in a real-life case. To do so, we will discuss a hypothetical design competition with 4 entries; the number of submissions is set to 4 for ease of demonstration.

Because there are only 4 submissions, the ranking board should look like the following:

First Place (1st) is the obvious winner, while second (2nd), third (3rd) and fourth (4th) places correspond to other possibilities, such as awarded, mention, runner-up or not-awarded.

Assume that we have 4 different submissions or designs (D1, D2, D3, D4) to be voted on. The number of possible orderings is 4! = 4 x 3 x 2 x 1 = 24. The general rule is n! (n-factorial): in mathematics, the factorial of a positive integer n, denoted n!, is the product of all positive integers less than or equal to n.
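The count of possible orderings can be checked with a few lines of Python; the design names here are the hypothetical D1-D4 entries from the example above.

```python
from itertools import permutations

# Four hypothetical design entries, as in the example above.
designs = ["D1", "D2", "D3", "D4"]

# Every possible preference ordering of the four entries.
orderings = list(permutations(designs))

# n! orderings for n entries: 4! = 24 here.
print(len(orderings))  # 24
```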

Assume that we have 4 different jury members (J1, J2, J3, J4), and also assume that each jury member has a different preference order from the others; if P is the preference order of a jury member, we can state that ∀x≠y: P(Jx) ≠ P(Jy), in other words P(J1), P(J2), P(J3) and P(J4) are all pairwise distinct.

Now that we have both the submissions and the jury, we can run a jury session to determine the winners. In most cases, entries are ranked so that there is a winner, a second place, a third place, a fourth place and so on. But this might not always be the case: in some design competitions only the winners are ranked, and the rest are discarded.

The most common way to order entries in a design competition is collective preference ordering, where entries are ranked all together through up or down votes, with many jury members acting at once via discussion, physical movement, ordering of designs and constant dialogue; in this way many entries are ranked within a short amount of time. This is a hive-mind process in which individual jury members lose their distinct personalities and act as a community, forming the Community Jury (J-C). It is an efficient way to rank designs, but it comes with its own unique issues, especially:

Issues with Collective Preference Ordering (Community Jury)
1. Ranking attention is asymmetric and vague; less attention is given to the losers at the lower end, while more attention is given to the winners at the high end. The ranking is clear for the first, second and third places, but not so clear for the last place, the one before last, and so on.
2. There is a large bandwagon effect (the bandwagon effect means that people often do and believe things merely because many other people do and believe the same things; if most of the jury members like an entry, a single jury member will tend to feel the same way just because the others do).
3. A high degree of conformity arises; choices and orderings become group-consistent, and individual members might be afraid to state contrary opinions for fear that doing so could lead to a negative evaluation within the group.
4. There is a potential threat that hierarchic community ranks affect the votes. For example, Alpha voters, who can influence the votes of others such as Betas and Omegas, can significantly change the ordering. The Alpha voter is the jury member with the highest hierarchic rank or experience; Betas are second and Omegas are last.
5. The only feedback a participant gets (in addition to any written comments) is their current rank, and in most cases entries are either ranked or not ranked; on most occasions only the winners are ranked and non-winners are not assigned any rank, so the amount of feedback decreases even further.
6. The ranking is highly jury-dependent: when the jury changes, the ranking changes completely.
7. In the end, either the ranking criteria are not stated in a clear fashion, or the ranking criteria are not followed at all; this happens naturally because in this type of voting the jury members are fully reflecting in action and interaction.
8. It is highly vulnerable to coalition strategies; a small portion of the jury members can affect the outcome to a great degree.
9. It requires physical meetings to be practical and efficient: all jury members must be present at the same location at the same time, and as the size of the jury increases, it becomes evidently more difficult to organize such juries.
On the other hand, collective preference ordering might be desirable in cases where there is an experienced jury member with alpha properties whose preferences we wish to give more weight.

We can improve this voting system for design competitions by asking jury members to vote independently of each other. This is actually a fairly common way of ranking that is also used in international sports competitions. But what we suggest at this step differs: instead of directly giving scores to each design, let us stay focused on the ordering, so that we can understand how the scores are given.

At this step, each jury member orders the design submissions on their own by casting preference votes. (In some competitions, jurors have a specific number of "votes" to distribute among the designs.)

Jury Member | 1st | 2nd | 3rd | 4th

How do we define the winner in this case? If we consider all the jury members as equals, we could use an established strategy: the multiple-winner Borda count. We start by assigning a score to each rank; let us say that 1st is worth 4 points, 2nd is 3 points, 3rd is 2 points and 4th is 1 point. Then, for each of the submissions, we sum these points.

Designs | Vote J1 | Vote J2 | Vote J3 | Vote J4 | Totals | Ranking
D1 | 4 | 1 | 2 | 4 | 11 | 2
D2 | 3 | 4 | 1 | 1 | 9 | 3
D3 | 2 | 3 | 4 | 3 | 12 | 1
D4 | 1 | 2 | 3 | 2 | 8 | 4
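The Borda tally above can be sketched in a few lines of Python. The ballots below are an assumed reconstruction from the point table (each juror lists designs from 1st to 4th place; a design in position k out of n earns n − k + 1 points, i.e. 4, 3, 2, 1 here).

```python
def borda_totals(ballots):
    """Sum Borda points over all ballots; position k (0-based) earns n - k points."""
    totals = {}
    for order in ballots.values():
        n = len(order)
        for k, design in enumerate(order):
            totals[design] = totals.get(design, 0) + (n - k)
    return totals

# Ballots reconstructed from the vote table above (an assumption for illustration).
ballots = {
    "J1": ["D1", "D2", "D3", "D4"],
    "J2": ["D2", "D3", "D4", "D1"],
    "J3": ["D3", "D4", "D1", "D2"],
    "J4": ["D1", "D3", "D4", "D2"],
}

totals = borda_totals(ballots)
winner = max(totals, key=totals.get)
print(totals)  # {'D1': 11, 'D2': 9, 'D3': 12, 'D4': 8}
print(winner)  # D3
```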

We see that for this voting session the jury has selected D3 as the winner. But we still have some issues to solve: the jury members are indeed not equals when we consider their skill sets; they have different backgrounds that give them better judging skills on different aspects of a design. Jury members coming from industry are more likely to know whether a design is easy to manufacture, while, for instance, jury members coming from the academic sphere are more likely to know whether a design has already been done before.

How can we reflect the different skill sets of jury members in the voting? The answer is trivial: we can weight the votes of different jury members for each criterion or checkpoint. Criteria or checkpoints are specifically focused pieces of information that we consider when evaluating a design project; for example, the ergonomics aspect of a design could be one criterion, while ease of manufacturing or innovative use of materials could be others. Although the answer is trivial, applying it requires further study.

So the question now becomes: which criteria can we use when evaluating projects? For each product category, the criteria should be different; for a graphic design, for example, we cannot talk about ergonomics. We could use many classifications; for industrial design alone, we would have 32 categories if we used the Locarno Classification. A better approach, however, could be to depend not on categories but on criteria that are present in any type of design.

Criteria | What to seek for? | Score | Weight
Aesthetics | Is the design aesthetically appealing? Does the design have a refined form and shape, texture, finishing or details, colors and color options that are suitable to the current social context? Is the design trendy or timeless, and does it create an emotional impact? | A | Wa
Identity | Is there a clear, coordinated identity for the product throughout the packaging, user's manual, marketing and branding communication? Does the design have a clear target segment or relevance? | B | Wb
Innovation | Is the design innovative? Is it different from all previous designs in an intelligent fashion? Are there any elements of the project that create a unique added value for the design? Is the project highly developed, with in-detail thinking? | C | Wc
Functionality | Does the design serve a utilitarian purpose, i.e. is it capable of serving the purpose for which it was designed? Does the design serve its purpose well? | D | Wd
Ergonomics | Could the subject who interacts with the design do so in a healthy, comfortable, safe, easy, friendly and efficient manner? | E | We
Sustainability | Does the design focus on repairability, durability, impact on nature, recyclability and reusability? Is the design resource-friendly, and does it make efficient use of materials and technologies? | F | Wf
Economics | Are economies of scale present? Is the design highly marketable? Does the design have a unique selling proposition? Does the design have a cost advantage, and is it easy to produce efficiently? | G | Wg
Graphic Presentation | Does the graphic presentation communicate the design project clearly through the use of graphics? There should not be any watermarks on the design that might lead to the identification of the participants. Designs should be clearly identifiable and preferably stand alone. | H | Wh
Textual Description | Does the description explain the design in a clear fashion, giving fundamental information about the engineering, aesthetics, function, core idea, challenges, research, and the interaction and operation of the design project? | I | Wi
Primality | How good are the fundamental qualities of this design, based on the specific qualities that you would check particularly for the category? | Z | Wz

Given these criteria, we could vote on any type of design project; we will have a total score for each design using the following formula:

Total Score (TS) = A x Wa + B x Wb + C x Wc + D x Wd + E x We + F x Wf + G x Wg + H x Wh + I x Wi + Z x Wz

Designs | Total Score from a Single Jury Member
D1 | 9 x Wa + 8 x Wb + 7 x Wc + 7 x Wd + 8 x We + 8 x Wf + 7 x Wg + 8 x Wh + 9 x Wi + 9 x Wz
D2 | 8 x Wa + 7 x Wb + 6 x Wc + 7 x Wd + 8 x We + 9 x Wf + 7 x Wg + 6 x Wh + 5 x Wi + 7 x Wz
D3 | 7 x Wa + 6 x Wb + 5 x Wc + 6 x Wd + 5 x We + 4 x Wf + 6 x Wg + 5 x Wh + 4 x Wi + 5 x Wz
D4 | 5 x Wa + 4 x Wb + 2 x Wc + 4 x Wd + 3 x We + 1 x Wf + 4 x Wg + 2 x Wh + 3 x Wi + 2 x Wz

 

The following is an example vote from a single jury member, if all the criteria have equal weights.

Designs | A | B | C | D | E | F | G | H | I | Z | Total Score | Ranking
D1 | 9 | 8 | 7 | 7 | 8 | 8 | 7 | 8 | 9 | 9 | 80 | 1
D2 | 8 | 7 | 6 | 7 | 8 | 9 | 7 | 6 | 5 | 7 | 70 | 2
D3 | 7 | 6 | 5 | 6 | 5 | 4 | 6 | 5 | 4 | 5 | 53 | 3
D4 | 5 | 4 | 2 | 4 | 3 | 1 | 4 | 2 | 3 | 2 | 30 | 4

Now let's demonstrate the weights as well. The Criteria Score (CS) is calculated as the criteria point times the weight of the criterion (Y x Wy).

Designs | A | Wa | B | Wb | C | Wc | D | Wd | E | We | F | Wf | G | Wg | H | Wh | I | Wi | Z | Wz | Total Score | Ranking
D1 | 9 | 10 | 8 | 10 | 7 | 10 | 7 | 10 | 8 | 10 | 8 | 10 | 7 | 10 | 8 | 10 | 9 | 10 | 9 | 10 | 800 | 1
D2 | 8 | 10 | 7 | 10 | 6 | 10 | 7 | 10 | 8 | 10 | 9 | 10 | 7 | 10 | 6 | 10 | 5 | 10 | 7 | 10 | 700 | 2
D3 | 7 | 10 | 6 | 10 | 5 | 10 | 6 | 10 | 5 | 10 | 4 | 10 | 6 | 10 | 5 | 10 | 4 | 10 | 5 | 10 | 530 | 3
D4 | 5 | 10 | 4 | 10 | 2 | 10 | 4 | 10 | 3 | 10 | 1 | 10 | 4 | 10 | 2 | 10 | 3 | 10 | 2 | 10 | 300 | 4

If we were to repeat this for each jury member, we would then have a table such as the following:

Total Scores by Different Jury Members
Designs | J1 | J2 | J3 | J4 | Average Total Score | Ranking
D1 | 800 | 900 | 670 | 530 | 725 | 1
D2 | 700 | 750 | 520 | 320 | 573 | 2
D3 | 530 | 570 | 650 | 480 | 558 | 3
D4 | 300 | 450 | 710 | 280 | 435 | 4
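The averaging step above can be sketched as follows; the per-jury totals are taken from the table, and the printed averages are the raw values before rounding.

```python
# Per-jury total scores for each design, from the table above.
scores = {
    "D1": [800, 900, 670, 530],
    "D2": [700, 750, 520, 320],
    "D3": [530, 570, 650, 480],
    "D4": [300, 450, 710, 280],
}

# Average each design's total score across the jury members,
# then rank the designs by average, highest first.
averages = {d: sum(v) / len(v) for d, v in scores.items()}
ranking = sorted(averages, key=averages.get, reverse=True)

print(averages)  # {'D1': 725.0, 'D2': 572.5, 'D3': 557.5, 'D4': 435.0}
print(ranking)   # ['D1', 'D2', 'D3', 'D4']
```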

The ranking is made in descending order of score: the design with the highest total score (TS) becomes the 1st. But now we have another important question: what are the weights of these criteria? How can we calculate a correct, real score if we do not know the weights? In the example above, we gave equal weights to each criterion.

Designs | A | B | C | D | E | F | G | H | I | Z | Total Score | Ranking
D1 | 9 x Wa | 8 x Wb | 7 x Wc | 7 x Wd | 8 x We | 8 x Wf | 7 x Wg | 8 x Wh | 9 x Wi | 9 x Wz | 800 | 1
D2 | 8 x Wa | 7 x Wb | 6 x Wc | 7 x Wd | 8 x We | 9 x Wf | 7 x Wg | 6 x Wh | 5 x Wi | 7 x Wz | 700 | 2
D3 | 7 x Wa | 6 x Wb | 5 x Wc | 6 x Wd | 5 x We | 4 x Wf | 6 x Wg | 5 x Wh | 4 x Wi | 5 x Wz | 530 | 3
D4 | 5 x Wa | 4 x Wb | 2 x Wc | 4 x Wd | 3 x We | 1 x Wf | 4 x Wg | 2 x Wh | 3 x Wi | 2 x Wz | 300 | 4

This is intriguing, because normally the weights for the criteria are given at the beginning of a design competition, predetermined by some experienced jury members, consultants, or the organizer. However, the truth is that these predetermined values for the criteria weights rarely reflect the true preferences of the jury members. The ranking by preference ordering is therefore usually different from the ranking by criteria voting. The aim is to find the correct weights for the criteria such that the two systems produce very similar results: the ranking by preference ordering should be similar to the ranking by criteria voting.

Designs | Preference Ordering Rank | Preference Ordering Score | Criteria Voting Rank | Criteria Voting Score
D1 | 1 | 4 | 1 | 800
D2 | 2 | 3 | 2 | 700
D3 | 3 | 2 | 3 | 530
D4 | 4 | 1 | 4 | 300

There is indeed a way to find these weights: by reverse-engineering the preferences. To do so, we need the jury members to evaluate each design twice, using both of the methods. Afterwards, we can run a regression analysis to find out what the weights are. This could be computed analytically, or brute-force calculated by trying millions of possibilities within seconds using a computer algorithm. We aim for the preference ordering rank of each design to be similar to its criteria voting rank.

Designs | Preference Ordering Rank | Preference Ordering Score | Criteria Voting Rank | Criteria Voting Score
D1 | 1 | 4 | 1 | ?
D2 | 2 | 3 | 2 | ?
D3 | 3 | 2 | 3 | ?
D4 | 4 | 1 | 4 | ?

The analysis could be done for each jury member, to see their personal preferences, or globally, to understand the community preferences of all the jury members and see the general results. We need to consider one more thing: we cannot let the total number of submissions affect the preference score to a great degree. Instead of using the standard formula for the preference score, Preference Score (PS) = Total Submissions (N) + 1 - Preference Order of Design, we can use a modified formula that keeps the score normalized even when the number of submissions varies. To do so, we come up with the following formula:

Modified Preference Score (MPS) = PS / Max(PS) x Max(TS)

Designs | Preference Ordering Rank | Preference Ordering Score | Modified Preference Ordering Score
D1 | 1 | 4 | 100
D2 | 2 | 3 | 75
D3 | 3 | 2 | 50
D4 | 4 | 1 | 25

Above, if the maximum total score is 100, we obtain normalized, modified preference ordering scores for each of the designs.
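The MPS formula can be sketched directly from the definitions above; note that Max(PS) = N, since the best-ranked design has PS = N + 1 - 1 = N.

```python
def modified_preference_score(rank, n_submissions, max_total_score=100):
    """MPS = PS / Max(PS) x Max(TS), where PS = N + 1 - rank and Max(PS) = N."""
    ps = n_submissions + 1 - rank
    return ps / n_submissions * max_total_score

# Four submissions and a maximum criteria-voting total score of 100,
# as in the table above:
mps = [modified_preference_score(r, 4) for r in (1, 2, 3, 4)]
print(mps)  # [100.0, 75.0, 50.0, 25.0]
```

Because the score is divided by Max(PS), the result stays on the same 0-100 scale whether the competition has 4 submissions or 400.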

Now we try to find the best weights such that the total scores from criteria voting are equal or similar to the modified preference ordering scores from preference voting.

Total Scores from Criteria Voting = Modified Preference Ordering Score:
9 x Wa + 8 x Wb + 7 x Wc + 7 x Wd + 8 x We + 8 x Wf + 7 x Wg + 8 x Wh + 9 x Wi + 9 x Wz = 100
8 x Wa + 7 x Wb + 6 x Wc + 7 x Wd + 8 x We + 9 x Wf + 7 x Wg + 6 x Wh + 5 x Wi + 7 x Wz = 75
7 x Wa + 6 x Wb + 5 x Wc + 6 x Wd + 5 x We + 4 x Wf + 6 x Wg + 5 x Wh + 4 x Wi + 5 x Wz = 50
5 x Wa + 4 x Wb + 2 x Wc + 4 x Wd + 3 x We + 1 x Wf + 4 x Wg + 2 x Wh + 3 x Wi + 2 x Wz = 25

Of course the above system is unsolvable as it stands: there are not enough submissions to determine the weights. For the system to be solvable, we would need at least 10 submissions, one equation per unknown weight. Furthermore, even with the minimum number of submissions the numbers would still not make sense, as the modified preference ordering score is itself somewhat biased. Instead, with a statistically significant number of votes, our aim is to find consistency in the following way:

Total Scores from Criteria Voting = Modified Preference Ordering Score:
9 x Wa + 8 x Wb + 7 x Wc + 7 x Wd + 8 x We + 8 x Wf + 7 x Wg + 8 x Wh + 9 x Wi + 9 x Wz = S1
8 x Wa + 7 x Wb + 6 x Wc + 7 x Wd + 8 x We + 9 x Wf + 7 x Wg + 6 x Wh + 5 x Wi + 7 x Wz = S2
7 x Wa + 6 x Wb + 5 x Wc + 6 x Wd + 5 x We + 4 x Wf + 6 x Wg + 5 x Wh + 4 x Wi + 5 x Wz = S3
5 x Wa + 4 x Wb + 2 x Wc + 4 x Wd + 3 x We + 1 x Wf + 4 x Wg + 2 x Wh + 3 x Wi + 2 x Wz = S4

where S1 > S2 > S3 > S4, and we could use the modified preference ordering score as a reference. Given all the above information, our test is as follows:

The jury votes twice: 1. by preference ordering, and 2. by criteria voting. We then try to find consistent criteria weights that produce results similar to the preference orderings.

You might ask why we try to match the criteria votes to the preference ordering values. The reason is to obtain intertemporally comparable results in the end: if we ran pure preference ordering in every competition, the implied criteria weights would be different each time, and the results of different runs could not be compared.
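The regression step described above can be sketched with a least-squares fit. This is only an illustration under the article's own caveat: with 4 designs and 10 unknown weights the system is underdetermined, so `numpy.linalg.lstsq` returns the minimum-norm solution rather than a unique set of weights; in practice a statistically significant number of ballots would be stacked into the matrix.

```python
import numpy as np

# Rows: each design's ten criteria scores (A..I, Z) from the tables above.
X = np.array([
    [9, 8, 7, 7, 8, 8, 7, 8, 9, 9],
    [8, 7, 6, 7, 8, 9, 7, 6, 5, 7],
    [7, 6, 5, 6, 5, 4, 6, 5, 4, 5],
    [5, 4, 2, 4, 3, 1, 4, 2, 3, 2],
], dtype=float)

# Targets: the modified preference ordering scores for D1..D4.
y = np.array([100.0, 75.0, 50.0, 25.0])

# Least-squares estimate of the criteria weights; with more unknowns
# than equations this is the minimum-norm solution.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted weights reproduce the target scores for these four designs.
fitted = X @ w
print(np.round(fitted, 6))
```

With real data, one would also constrain the weights to be non-negative (e.g. a non-negative least-squares fit) so that no criterion counts against a design.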

 

 

 
