
The Sims Social


The Sims Social was a Facebook addition to The Sims series of video games. It was announced at E3, though no launch date was offered at the time, and released on August 9, 2011. Developer: Playfish. Publisher: Electronic Arts. Series: The Sims. Platform: Facebook Platform. Genre: dating sim, life simulation. To date, Playfish's roster of social games has been installed many millions of times by players around the world.

When the player upgraded the level of a trait, the trait became more prominent in the Sim's lifestyle. The most visible example of this was the Ninja trait: a Sim with the level one Ninja trait walked faster, while a Sim with the level five Ninja trait could teleport from place to place. The Insane trait reduced the need to be social and let Sims talk to plants instead of visiting other people; in previous versions of the game, talking to plants also drained the Energy needed to fulfill Social.

For example, a level 4 Insane trait let the Sim talk to themselves to fulfill the Fun and Social needs without draining Energy. Certain career traits, like Art Virtuoso, provided the benefit of earning more Simoleons while using the respective skill object, but these traits could only be unlocked by reaching specific career levels. Unlike other games in the series, Sims could not die.

There were six needs. Inspired Sims earned more Simoleons when performing skill tasks, while Sims in a bad mood would not follow the player's directions; instead, they fulfilled their own needs autonomously. All mood meters cycled from deep green (good) through lighter green, yellow, orange, and red, and finally to gray. All needs could be improved from the Sim's home.

Another feature, located next to the needs, was the fun meter. This could vary widely depending on the Sim's personality.

Sims took care of themselves using autonomy. If left to their own devices, Sims performed actions that helped the lowest meter, provided they had an appropriate object nearby. This could not go on for extended periods of time, as eventually the game paused and told the player "Your Sim Needs You!"
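A minimal sketch of how that autonomous behaviour could be modelled; the class and method names here are hypothetical, invented for illustration, not taken from the game's actual code:

    import java.util.*;

    // Hypothetical model: a Sim picks an action for its lowest need,
    // but only if a suitable object is within reach.
    class AutonomousSim {
        // Need level per need name, e.g. "Hunger" -> 0.2 (low is bad).
        private final Map<String, Double> needs = new HashMap<>();
        // Objects nearby, keyed by the need they satisfy.
        private final Map<String, String> nearbyObjects = new HashMap<>();

        void setNeed(String need, double level) { needs.put(need, level); }
        void addNearbyObject(String need, String object) { nearbyObjects.put(need, object); }

        // Returns a description of the chosen action, or empty if the Sim
        // is stuck and the game should pause with "Your Sim Needs You!".
        Optional<String> chooseAction() {
            return needs.entrySet().stream()
                    .sorted(Map.Entry.comparingByValue())          // lowest meter first
                    .filter(e -> nearbyObjects.containsKey(e.getKey()))
                    .findFirst()
                    .map(e -> "Use " + nearbyObjects.get(e.getKey())
                            + " to raise " + e.getKey());
        }
    }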

Currency

The Sims Social had four currencies, which were used to purchase items in the game. Simoleons were the most basic currency and could be most readily earned by performing almost any non-autonomous task.

SimCash could be obtained by purchasing it with real-world currency; however, an update to the game permitted users to earn up to 10 SimCash as a reward for playing the game on 5 consecutive days. SimCash allowed the player to purchase special and limited-edition objects.

When the player began the game, they received 40 free SimCash, which could be used to purchase objects that were not available for Simoleons. Another currency could not be bought with SimCash at all; it could only be earned by completing various quests or by levelling up on skill objects.

Social Interaction

Unlike its predecessors, The Sims Social used Facebook's socializing features to let players send and receive gifts in order to finish certain quests or assemble certain objects. For example, when a Sim levelled up a skill, certain items were needed to unlock the next level.

Most of these items were obtained by sending requests to other friends or by interacting with friends' Sims. When the player was unable to obtain objects from friends, the only other option was to skip the task using SimCash. Furthermore, certain items, such as double beds and couches, had a hammer icon in the right corner, denoting that 'some assembly is required'. To assemble these items, a player needed certain items that usually could only be obtained by sending requests to friends.

Traditional games are immersive experiences with great sound and graphics. Playfish tries to design games for social interaction, where you get a benefit in the game by inviting friends. You could play Restaurant City on your own, but it's more fun to invite your friends to be cooks in your restaurant. You get incrementally more fun by adding friends into a game.

The desire of users to involve friends in order to have more fun is what drives greater distribution and growth. Playfish has a micro-transaction based revenue model; micro-transactions are typically high-volume, low-value transactions. To pay for the service, Playfish makes money from players purchasing in-game virtual items and services, as well as from ads and product placements. The cloud is almost the perfect answer to a large number of the forces covered in the previous section. I won't bore you by talking about all the wonders of the cloud, but it's easy to see how elasticity helps solve many of their variable-demand problems.

If a game becomes popular they can spin up more resources to handle the load, with no procurement process necessary. And if demand falls they can simply give the instances back. You may think that with a pay-as-you-go model the cloud is too expensive (we'll talk more about this in the cloud section), but they don't see it that way.

They are far more concerned about the opportunity cost of not being able to develop new games and improve existing games in a rapidly evolving market. They are very big on SOA.

It's the organizing principle of their architecture and how they structure their teams. Each game is considered a separate service and they are released independently of each other.

Internally, software is organized into components that offer an API, and these components are separately managed and scaled. Playfish is a write-heavy service. To deal with writes, Playfish went to a sharded architecture, because sharding is the only real way to scale writes. Multiple records are not stored per user. Writing on the server is asynchronous from game play.

The user does not have to wait for writes to complete on the server to continue playing the game. They try to hide latency from the user as much as possible. In an MMO each move has to be communicated to all the users and latency is key; not so with Playfish games. By using a smart Flash client, Playfish is able to take advantage of the higher processing power on client machines.
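A minimal sketch of the asynchronous-save idea, assuming a hypothetical GameStateStore and a background executor; none of these names come from Playfish's actual code:

    import java.util.concurrent.*;

    // Fire-and-forget persistence: game play continues immediately while
    // the save happens on a background thread.
    class AsyncSaver {
        private final ExecutorService saveExecutor = Executors.newFixedThreadPool(4);
        private final GameStateStore store;              // hypothetical storage component

        AsyncSaver(GameStateStore store) { this.store = store; }

        // Called whenever the client reports a state change; returns at once.
        void saveAsync(long userId, byte[] serializedState) {
            saveExecutor.submit(() -> {
                try {
                    store.write(userId, serializedState); // may take seconds; user never waits
                } catch (Exception e) {
                    // In a real system this would be retried or queued for later.
                    System.err.println("Save failed for user " + userId + ": " + e);
                }
            });
        }
    }

    interface GameStateStore {
        void write(long userId, byte[] serializedState) throws Exception;
    }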

As they add more users they also add more compute capacity.


They have to get the right balance of what is on the server and what is on the client side; the appropriate mix varies by game. The smart client caches to save on reads, but it also allows the game to be played independently on the client, without talking to the servers.

Data-driven game improvement. Playfish collects an enormous amount of data on game play that they use to continually improve existing games and to help decide what games to invent next. They are using Amazon's Elastic MapReduce as their analytics platform. A common thread through all of Playfish's thinking is the relentless need for agility: to be able to respond quickly, easily, and efficiently to every situation. Agility is revealed in their choice of the cloud, organizing around services, fast release cycles, keeping teams small and empowered, and continually improving game design through data mining and customer feedback.
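As an illustration of the kind of analytics job that might run on Elastic MapReduce, here is a minimal Hadoop job that counts game-play events per event type; the input format and field layout are assumptions for the example, not Playfish's actual log schema:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Counts game-play events per event type from tab-separated log lines of the
    // assumed form: timestamp <TAB> userId <TAB> eventType <TAB> ...
    public class EventCount {

        public static class EventMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            private static final LongWritable ONE = new LongWritable(1);
            private final Text eventType = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\t");
                if (fields.length >= 3) {
                    eventType.set(fields[2]);
                    context.write(eventType, ONE);
                }
            }
        }

        public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new LongWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "event-count");
            job.setJarByClass(EventCount.class);
            job.setMapperClass(EventMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. raw event logs
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // aggregated counts
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }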

Basic Game Architecture

Games run in Flash clients. The clients send requests, in the form of a service-level API, to HAProxy servers, which load balance requests across Jetty application servers running in the Amazon cloud. All back-end applications are written in Java and are organized using a service-oriented architecture.

The Jetty servers are all stateless, which simplifies deployments and upgrades and improves availability. MySQL is used as the data tier. Playfish was an early adopter of Amazon's cloud, so they were unable to make use of later Amazon developments like load balancing; if they had to start from scratch they would probably go that direction. Changes are pushed to the server asynchronously. Rather than the user clicking a button and that action being sent to the server to see what happened, the system is designed so the client is empowered to decide what the action is, and the server checks that the action is valid.
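A minimal sketch of that client-decides, server-validates pattern; the action, repository, and rule names are hypothetical, invented for illustration:

    // The client has already applied the action locally; the server only has to
    // decide whether the claimed action was legal for this user's known state.
    class ActionValidator {

        // Hypothetical action reported by the Flash client.
        record BuyItemAction(long userId, String itemId, int priceInSimoleons) {}

        private final UserStateRepository users;   // hypothetical state lookup

        ActionValidator(UserStateRepository users) { this.users = users; }

        boolean isValid(BuyItemAction action) {
            UserState state = users.load(action.userId());
            // Reject anything the client could not legitimately have done:
            // unknown items, negative prices, or spending money the user doesn't have.
            return state != null
                    && state.catalogContains(action.itemId())
                    && action.priceInSimoleons() >= 0
                    && state.simoleonBalance() >= action.priceInSimoleons();
        }
    }

    interface UserStateRepository { UserState load(long userId); }

    interface UserState {
        boolean catalogContains(String itemId);
        long simoleonBalance();
    }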

Processing is on the client side. If there is high latency between the user and the service, the user won't see it, because of asynchronous saving and the smart client.

At the end of the day it's what the user sees that matters. The important thing is that when there are glitches in the network or higher latency, the user still has a fun game experience. Playfish's first few games were simple single-player games, like Who Has the Biggest Brain, with features like high scores and challenges. They were not very complicated and not very heavy on the server side. They had only 5 weeks from start to finish on the project.

That included learning and coding to the Facebook APIs, learning and coding to AWS, coding the game servers, and setting up the production infrastructure. The first three games continued that pattern, which gave them some breathing room to start building out their infrastructure. The first game that changed things was Pet Society, the first game that made significant use of virtual items.

Data storage went from storing a few attributes per user, like avatar customization and high score, to storing potentially thousands of items per user for all the virtual items. This put a lot of strain on the system.


There were some big service problems in the early days as they grew very, very fast. Then they put in sharding and the system became much more stable. But first they tried various other techniques: they tried adding more read replicas, 12 at one point, but that didn't really work. They tried to patch the system as best they could.

Then they bit the bullet and put in the sharding. It took 2 weeks from start to roll-out, and all of the performance problems on that game went away. Over time users acquired lots and lots of virtual items, so the volume of rows exploded; each item was stored in its own row. Even when users were split into shards, they found that for older users the shards would keep growing and growing, even if the number of users stayed the same. This was one of the original drivers for going to BLOBs.

BLOBs got rid of the multiple rows that caused such performance problems. Over time games started getting more complicated. Pet Society didn't have any simulation elements. Then they launched Restaurant City, which had the first offline simulation element: your restaurant continued to run and earn money while you were away. This introduced challenges, but adding the extra processing in the cloud was relatively straightforward. The simulation logic was implemented in the client, and the server would do things like check for fraud.
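A minimal sketch of what one-BLOB-per-user storage can look like on top of MySQL via JDBC; the table name, columns, and serialization format are assumptions for illustration, not Playfish's actual schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Assumed table:
    //   CREATE TABLE user_state (
    //     user_id BIGINT PRIMARY KEY,
    //     state   MEDIUMBLOB NOT NULL
    //   );
    // The entire game state (all virtual items, etc.) is serialized by the
    // application into a single BLOB, so each user is exactly one row.
    class BlobUserStore {
        private final Connection connection;

        BlobUserStore(Connection connection) { this.connection = connection; }

        void save(long userId, byte[] serializedState) throws SQLException {
            String sql = "INSERT INTO user_state (user_id, state) VALUES (?, ?) "
                       + "ON DUPLICATE KEY UPDATE state = VALUES(state)";
            try (PreparedStatement ps = connection.prepareStatement(sql)) {
                ps.setLong(1, userId);
                ps.setBytes(2, serializedState);
                ps.executeUpdate();
            }
        }

        byte[] load(long userId) throws SQLException {
            String sql = "SELECT state FROM user_state WHERE user_id = ?";
            try (PreparedStatement ps = connection.prepareStatement(sql)) {
                ps.setLong(1, userId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getBytes("state") : null;
                }
            }
        }
    }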

A SOA encapsulates data and function together into components that can be deployed independently through a distributed system. The distributed components talk through APIs called service contracts.

Services make sure the dependencies between all the parts of the system are well known and as loosely coupled as possible. The system can be composed of separate, understandable components. Components are deployed and upgraded independently, which gives flexibility and agility and makes it easier to scale development and operations teams. Independent services can be optimized independently of other services.
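For example, a service contract can be as simple as a Java interface that one team exposes and other teams code against; the names below are hypothetical, not taken from Playfish's actual services:

    // A hypothetical service contract: the only thing other services see.
    // The implementation, storage, and deployment behind it can change freely
    // as long as this contract is honoured.
    public interface GiftService {

        // Record that one player sent a gift to another; returns a gift id.
        long sendGift(long fromUserId, long toUserId, String itemId);

        // List the gifts waiting for a player to accept.
        java.util.List<PendingGift> pendingGifts(long userId);

        record PendingGift(long giftId, long fromUserId, String itemId) {}
    }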

When a service fails it's easier to degrade gracefully. Each game is considered to be a service; the UI and the backend are a package.

They don't do separate releases of the UI and the backend. The cloud allows Playfish to innovate and try new features and new games with very low friction, which is key in a fast-moving market. Moving from their many-read-replica system to a sharded system took 2 weeks, which couldn't have been done without the flexibility of the cloud. The cloud allows them to concentrate on what makes them special, not on building and managing servers.

Because of the cloud, operations doesn't have to focus on machine maintenance; they can focus on higher-value services, like developing automation across all their different servers and games. Capacity is now seen as a commodity when designing applications.

The ratio of servers to operations people is high, and such a high ratio is possible because of the cloud infrastructure. Servers fail, so this must be planned for from the start. It's not possible to keep adding memory to servers, so you may have to scale out earlier than you would like. A key feature of the cloud is flexibility: you can be as agile as you want. You don't need to be surprised when you suddenly get a lot of traffic, and you don't have to wait for procurement of servers. You never know how quickly a game will take off.

Sometimes you do know (sports games take off quickly), but other games may suddenly explode. In the cloud that doesn't have to be a problem. From the beginning there was never an expectation that they could scale up vertically in the cloud; everything is designed to scale by adding more machines.

They can't use all of the Amazon services because they had to roll their own; switching away from systems that they understand and that work would be unnecessarily risky, so switching to those services now wouldn't make sense. Otherwise, Playfish is cloud to the core. They take advantage of everything they can in the cloud. More capacity is acquired with ease. They have no internal servers at all; all development machines are in the cloud. The cloud makes it trivial to launch new environments.

To test sharding, for example, is easy: simply copy everything over with a new configuration. This is much harder when running in a datacenter.


The cloud is not more costly than bare metal when you consider everything. Bare metal may look cheaper based on bandwidth and a unit of rack space, but when you look at all the stuff you get, replicating it would be a lot of work. Take the advanced availability features, for example: change an API call and you get dual datacenters. You can't do that in a bare-metal situation.

Just consider the staffing costs to set it up and maintain it. The cloud looks really expensive, but when you get really big, capacity price breaks start kicking in. The major costs to consider are opportunity costs; this is the single biggest advantage. For example, when they first implemented sharding in Pet Society it took 2 weeks from start to deploy, and users were immediately happy. The speed of that implementation relied on being able to fire up a whole load of servers in production to test and migrate data.

If you had a two-month lead time you would have had a lot of unhappy users for two months. Playfish runs in multiple availability zones within the same region. Servers are relatively close together, which reduces latency; they aren't spread out like in MMO systems.

Latency is dealt with at a higher level, using asynchronous writes, caching in the client, and caching in a CDN. It can take 3 seconds to perform a game action back on the server, but because it's asynchronous, users don't notice. The CDN helps reduce what they do notice, which is asset and game loading.

CloudFront is used to reduce load latency. Playfish is global in the sense that they have users all over the world. The loading time of the game, which includes the Flash code plus game assets, is their most noticeable latency.

CloudFront reduces this latency as it spreads the content out geographically. Users are sharded across multiple database clusters, each with their own master and read replica. There's little benefit for them in having more replicas because they are write-heavy: nearly all the traffic is writes, writes are harder to scale, writes can't be cached, and more read replicas don't help. In an earlier architecture they had one master with 12 read slaves, which didn't perform well. With sharding they went from one master and 12 read replicas to two masters and two read replicas, which helped with both reads and writes.

Keeping the index in cache ensures a user lookup doesn't have to hit the disk; lookups can be served from RAM. By sharding they can control how many users are in each shard, so they can be sure they won't blow their in-memory index cache and start hitting the disk. They have tried to optimize the size of a user record so that more users fit in memory. This is why they went to storing BLOBs instead of data normalized into rows: now there is one database record per user.
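A minimal sketch of how users might be routed to shards; the modulo scheme and names are assumptions for illustration (a real system would more likely use a directory table or consistent hashing so shards can be rebalanced):

    import java.util.List;
    import javax.sql.DataSource;

    // Routes each user to one of a fixed set of database shards, each of which
    // is its own MySQL master (with its own read replicas behind it).
    class ShardRouter {
        private final List<DataSource> shardMasters;

        ShardRouter(List<DataSource> shardMasters) {
            this.shardMasters = shardMasters;
        }

        // Simple, stable mapping from user id to shard. Adding shards with this
        // scheme would move users around, which is why a lookup table or
        // consistent hashing is usually preferred in practice.
        int shardIndexFor(long userId) {
            return (int) Math.floorMod(userId, (long) shardMasters.size());
        }

        DataSource masterFor(long userId) {
            return shardMasters.get(shardIndexFor(userId));
        }
    }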

Work is taken out of the database server and moved to the application servers, which are very easily scaled horizontally in the cloud. Most websites use scaling techniques like memcache for read caching, but these aren't that useful to Playfish: with a Flash client, most of what would be cached in memcache is cached in the client. Send it once to the server and it's stored. Sharding is used to get more write performance. They still use MySQL for data storage because they are very comfortable with its performance characteristics under load.

For each shard there's a master and at least one read replica. For most shards there's just one read replica, but it depends on the access pattern of the service. Reads are split to the read replicas, and for the few places that do have more reads, they have more read replicas.
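A minimal sketch of splitting reads and writes within one shard; the round-robin choice and class names are assumptions for illustration:

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.sql.DataSource;

    // Within a single shard: all writes go to the master, reads are spread
    // round-robin across that shard's read replicas.
    class ShardConnections {
        private final DataSource master;
        private final List<DataSource> replicas;
        private final AtomicInteger next = new AtomicInteger();

        ShardConnections(DataSource master, List<DataSource> replicas) {
            this.master = master;
            this.replicas = replicas;
        }

        DataSource forWrite() {
            return master;
        }

        DataSource forRead() {
            // Fall back to the master if no replica is configured.
            if (replicas.isEmpty()) return master;
            int i = Math.floorMod(next.getAndIncrement(), replicas.size());
            return replicas.get(i);
        }
    }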

Read replicas are also used to keep data remotely as a backup. Playfish is driven by pure necessity: they built their own key-value store because it had to be done. Why not use NoSQL? They are looking into the options, but at the same time they have a solution that works and whose behaviour they understand.

They are interested in a NoSQL solution for the operations side, for managing multiple databases. It wasn't easy to go into the mode of running their own NoSQL-style store, but it was a necessity driven by their requirements.

In a scale-out situation you have to go to something like sharding, and at that point many SQL features go away; you have to do a lot more work yourself. You can't just add an index when you have blobbed and sharded. When going NoSQL you are giving up flexibility of access patterns.

Relational databases are good because you can access the data in any way you want. Since they can't use SQL to sum up fields anymore, for example, they either aggregate on the fly or they use a batch process to aggregate. Backups go to S3.

Flash - The Client

Client-side CPU and resources scale with the number of users, so it's sensible to make as much use of the client as possible.


Push as much processing as possible to the client. Changes are written asynchronously back to the server, which helps hide network latency from the user, and changes are checked on the server side to detect cheating. Flash talks to the Java application servers using a service-level API. Bringing processing closer to the user gives a better experience.

A website brings servers closer to users; Playfish brings the processing even closer, onto the client. For messaging between backend services, Playfish uses YAMI4. It offers point-to-point connectivity, low latency, no single point of failure, and asynchronous messaging for event-driven processing and parallelism across multiple backend services.

After services go through a discovery phase to learn where each endpoint is located, messages are transported directly between services. It's a brokerless model: messages aren't funneled into a centralized service and then redistributed. Messaging is very efficient with this model, and latency is reduced because there are no intermediary hops; the approach reduces failure points as well. Thrift, by contrast, uses an RPC model that looks like a local function call, which makes it harder to deal with errors, timeouts, etc.
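A rough sketch of the brokerless idea, using hypothetical interfaces rather than the real YAMI4 API: a registry (the separate discovery layer described below) resolves a service name to an endpoint once, after which messages go directly to that endpoint asynchronously:

    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical interfaces, not the YAMI4 API: a registry resolves a service
    // name to an endpoint once (the discovery phase), after which messages go
    // point-to-point with no central broker in between.
    interface ServiceRegistry {
        String endpointFor(String serviceName);           // e.g. "tcp://10.0.1.5:7040"
    }

    interface Transport {
        CompletableFuture<byte[]> send(String endpoint, String operation, byte[] payload);
    }

    class ServiceClient {
        private final ServiceRegistry registry;
        private final Transport transport;
        // Cache resolved endpoints so discovery happens only once per service.
        private final Map<String, String> resolved = new ConcurrentHashMap<>();

        ServiceClient(ServiceRegistry registry, Transport transport) {
            this.registry = registry;
            this.transport = transport;
        }

        // Asynchronous, direct call to another service; no intermediary hop.
        CompletableFuture<byte[]> call(String serviceName, String operation, byte[] payload) {
            String endpoint = resolved.computeIfAbsent(serviceName, registry::endpointFor);
            return transport.send(endpoint, operation, payload);
        }
    }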

YAMI4 doesn't handle the service discovery aspect, so they built their own layer on top that does a discovery phase and then talks directly to the other service. It's more like how the Internet works. As a messaging system, messages do not invoke methods on objects, and no objects flow over the network either; objects live in the services. Each service is responsible for activating, passivating, and dispatching operations.

Dealing with Multiple Social Networks

One of their main challenges is to be able to support so many different, and increasingly diverse, types of games.

The principle of loose coupling is used as much as possible. Team structure is matched to the architecture, with services being owned by teams. Services keep well-defined interfaces so that each team can iterate on and deploy their own service without affecting other teams. When interfaces need to change, interfaces are versioned, and they try to maintain backward compatibility so other teams do not have to roll out changes. To facilitate all of this, a common set of standards is applied to all services: a common service transport (YAMI4) and common operational standards, such as how services are configured and how they provide monitoring information.
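A small illustration of the interface-versioning idea, with hypothetical names; an old operation is kept alongside its replacement so existing callers keep working:

    // Version 2 of a hypothetical service contract. The v1 operation is kept
    // (and marked deprecated) so teams still on the old call don't have to
    // redeploy when v2 ships.
    public interface InventoryServiceV2 {

        // v1 operation, preserved for backward compatibility.
        @Deprecated
        void addItem(long userId, String itemId);

        // v2 operation: callers can now add several of the same item at once.
        void addItems(long userId, String itemId, int quantity);
    }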

A combination of common standards and loose coupling allows both development and operations teams to be agile and efficient.

Development and Operations

Services are released independently of each other. Resources are separated by service, so a problem with one service will not impact an unrelated service.