Dev Blog Week 4
Script Writing
During our quarter open house, most of our guests found the demo easy to understand but a little tedious, since the same questions kept looping. After the quarter, we explored a different design and script to address this. The original interaction, shown in the chart below, was formed by a set of tree-based decisions. Instead of that linear Q&A style, we tried listing many options and letting the player give commands directly. The new interaction is a flat system that removes the tedium of the looping questions; its architecture is shown in the figure below. The playtest results of the flat system were about what we expected: players only hit a few of the options we provide in the script.
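The two styles above can be sketched in a few lines. This is a toy illustration, not our actual script: the node names and command list are made up, but it shows why playtesters "miss" in the flat system, which only responds to commands that already exist in the script.

```python
# Linear (tree-based) style: each answer leads to one specific next node.
TREE = {
    "start":  {"yes": "node_a", "no": "node_b"},
    "node_a": {"yes": "node_c", "no": "start"},
}

# Flat style: any scripted command is accepted at any time.
FLAT_OPTIONS = {"cook", "sleep", "work", "chat"}

def tree_next(node, answer):
    # Follow the decision tree; unknown answers loop back to the same node.
    return TREE.get(node, {}).get(answer, node)

def flat_match(command):
    # The flat system recognizes only commands in the script,
    # so improvised commands fall through.
    return command in FLAT_OPTIONS
```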


Next week, we will have more design meetings and combine the two styles (linear and flat) so players feel a sense of freedom while still understanding their goals.

For the "Create Your Sim" part, we expanded from three questions to five in order to introduce more traits. One or two questions determine a trait. For example: "Which is better food, a Costco hot dog or a Michelin-star dish?" If the player chooses the Costco hot dog, we assume their Sim prefers quick and easy meals. Meanwhile, if a player feels uncomfortable with a question, they can skip to the next one in the database. We prepared around six questions per trait, about 30 questions in total drawn from The Sims 4, so players are asked different questions each time they create their Sim.
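The question database described above could be organized roughly like this. The trait names and question texts here are hypothetical placeholders, not our real bank; the point is drawing a random question per trait so every playthrough differs, with skips handled by redrawing from the same pool.

```python
import random

# Hypothetical trait question bank (illustrative entries only).
QUESTION_BANK = {
    "quick_eater": [
        "Which is better food, a Costco hot dog or a Michelin-star dish?",
        "Would your Sim rather cook at home or grab takeout?",
    ],
    "active": [
        "Morning jog or morning coffee?",
        "Gym membership or streaming subscription?",
    ],
}

def draw_questions(bank, per_trait=1, rng=random):
    """Pick random questions per trait so each Sim-creation run differs.

    A skipped question can simply be replaced by another draw from the
    same trait's pool.
    """
    return {trait: rng.sample(questions, per_trait)
            for trait, questions in bank.items()}
```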
Weekly Tech Blog (part):
After this week’s client meeting and discussion, we noticed that our final product will have many audio clips, each defined by several attributes. For example, Sims have different genders and emotions, so the same action needs different sound effects to match them. This is complicated for Skill Flow Builder to handle, because you can’t use an “if” to modify the output; instead you need as many branches as there are audio clips. Such a structure is not convenient for future maintenance.
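In Python, the branch-per-clip problem goes away with an attribute-keyed lookup. A minimal sketch, assuming clips are named by a (action, gender, emotion) convention; the filenames and attribute values are hypothetical:

```python
# Hypothetical clip table: one entry per attribute combination.
AUDIO_CLIPS = {
    ("eat", "female", "happy"): "eat_f_happy.mp3",
    ("eat", "female", "sad"):   "eat_f_sad.mp3",
    ("eat", "male", "happy"):   "eat_m_happy.mp3",
    ("eat", "male", "sad"):     "eat_m_sad.mp3",
}

def pick_clip(action, gender, emotion, default="silence.mp3"):
    # One lookup replaces a branch per clip: adding a new clip means
    # adding one table entry, not a new branch in the story structure.
    return AUDIO_CLIPS.get((action, gender, emotion), default)
```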
So I started porting the demo from Skill Flow Builder to Python. The two platforms have different logic structures. Skill Flow Builder is made of scenes: in one scene, Alexa first says something, and depending on what the player responds, it goes to the next specific scene. The Python API has handlers rather than scenes. In a handler, Alexa first recognizes whether the player’s input is acceptable for that handler; if so, it runs the handler’s code, has Alexa say something, and then waits for the next response. You can’t assign which handler runs next; that is decided by the incoming voice command.
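The handler model described above can be sketched in plain Python (modeled loosely on the ASK SDK for Python's `can_handle`/`handle` request-handler interface; the intent names and responses here are hypothetical):

```python
class CreateSimHandler:
    def can_handle(self, intent):
        # Each handler decides for itself whether it accepts the input.
        return intent == "CreateSimIntent"

    def handle(self, intent):
        return "Let's create your Sim. First question..."

class FallbackHandler:
    def can_handle(self, intent):
        return True  # catches anything the other handlers reject

    def handle(self, intent):
        return "Sorry, I didn't get that."

def dispatch(handlers, intent):
    # The runtime, not the author, picks the next handler: the first one
    # whose can_handle() accepts the incoming voice command wins.
    for h in handlers:
        if h.can_handle(intent):
            return h.handle(intent)

handlers = [CreateSimHandler(), FallbackHandler()]
```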