Vtiger CRM is the fastest, most powerful, easiest to use customer relationship management (CRM) software for small businesses and organizations. Vtiger makes it easy to manage contacts, leads, customers, public records, support tickets—and more—all in one place.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, reliable, scalable, and low-cost computational resources. It gives developers the tools to build virtually any web-scale application.

Amazon EC2 Integrations
Amazon EC2 + Slack: Get notified in Slack when a new instance is created in Amazon EC2. Read More...
Amazon EC2 + Slack: Receive Slack notifications for new Amazon EC2 scheduled events. Read More...
Gmail + Amazon EC2: Start, stop, or reboot an instance from a starred Gmail email [REQUIRED: Business Gmail Account]. Read More...
If you want to control your Amazon Elastic Compute Cloud (Amazon EC2) from your Gmail account, this integration is for you. Once you set it up, whenever you star an email in Gmail, Appy Pie Connect will automatically start, stop, or reboot (according to the set schedule) an instance running in your Amazon EC2. With this Gmail-Amazon EC2 integration, you can reduce the costs of running tests and Amazon EC2 instances.
It's easy to connect Vtiger + Amazon EC2 without coding knowledge. Start creating your own business flow.
Triggers when a new lead is created or an existing lead is updated.
Triggers when a new Case is created.
Triggers when a new Contact is created.
Triggers when a new Event is created.
Triggers when a new Invoice is created.
Triggers when a new Lead is created.
Triggers when a new Organization is created.
Triggers when a new Product is created.
Triggers when a new Service is created.
Triggers when a new Ticket is created.
Triggers when a new To Do is created.
Triggers when a new instance is created.
Triggers when a new event is scheduled for one of your instances.
Creates a new Case.
Creates a new Event in Vtiger.
Creates a new Organization/Account/Company.
Creates a new Product in Vtiger.
Creates a new Project.
Creates a new Service item in Vtiger.
Creates a new Ticket.
Creates a new To Do in Vtiger.
Triggers when a new contact is created or an existing contact is updated.
Creates a new lead or updates an existing lead.
Creates a new product or updates an existing product in Vtiger.
Updates an existing project in Vtiger.
Updates a selected To Do in Vtiger.
Start, Stop, or Reboot Instance
Vtiger is one of the leading open-source CRM applications. It integrates with other open-source applications to provide a complete suite for managing the CRM process. Recently, Vtiger integrated with Amazon EC2, an elastic cloud computing platform that provides resizable compute capacity in the cloud.
Vtiger can now be deployed on Amazon EC2 to provide an elastic solution for scalable hosting in the cloud. The integration between these two platforms provides flexibility in hosting Vtiger. In addition, it reduces the cost of running the database server, since it runs on Amazon EC2.
Some of the benefits of this integration are listed below:
Cost Saving. Amazon EC2 provides flexible, usage-based pricing. The monthly cost of Amazon EC2 is typically much lower than the cost of running a database server in-house.
Flexibility and Scalability. Vtiger can be hosted on multiple servers depending on the load, and the servers can be scaled up or down as demand changes.
Security. The data stored in Amazon EC2 is highly secure because of its encryption and multi-layered security measures. Hence, it is safe from external attacks and malicious programs.
Disaster Recovery. With Amazon EC2, there is no need to build a disaster recovery site on a company's premises: if disaster strikes, the workload can easily be shifted to another location.
The integration of Vtiger and Amazon EC2 provides an elastic and scalable solution for hosting the CRM application. The flexible infrastructure and the ability to recover from disasters make this integration a great option for hosting a CRM application.
Chapter 6. Performance Tuning
Performance tuning is an important aspect of administering a database system, and there are several approaches to improving performance. In this chapter, we will mainly discuss how to monitor database performance metrics, along with some strategies for performance tuning.
We have already discussed in previous chapters how to collect performance metrics. Monitoring metrics alone is not enough, however; you have to understand what they mean. In this chapter, we will look at various aspects of performance tuning. This will involve both hardening your database and working with your application developers to improve performance.
There are many tools available for monitoring database performance. Let us look at some commonly used ones:
pg_stat_statements. This is one of the most widely used extensions for monitoring performance statistics related to queries. It shows information such as the query ID, the user who executed the query, the time taken for execution, the number of rows processed by the query, and so on. It is very useful when you want to troubleshoot slow-performing queries caused by inadequate indexes, wrong execution plans, and the like. For example, we can use pg_stat_statements to identify whether there are any correlated subqueries that take longer to execute than expected. The following screenshot shows sample output of pg_stat_statements:
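As an illustration, a typical query against the pg_stat_statements view might look like the following sketch. The column names assume PostgreSQL 13 or later; older releases use total_time and mean_time instead of total_exec_time and mean_exec_time.

```sql
-- Requires the pg_stat_statements extension to be loaded
-- (shared_preload_libraries) and created with CREATE EXTENSION.
-- Top 10 statements by total execution time.
SELECT queryid,
       userid,
       calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       rows,
       left(query, 60) AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Sorting by mean_exec_time instead surfaces queries that are individually slow rather than merely frequent.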
pg_top. This tool is used to monitor all active processes in PostgreSQL. By default it displays the top 10 slowest running queries/processes, and it displays the top 100 if show_all is set to true. You can also specify a custom sort order with the sort parameter. It is very useful when you want to identify which query is taking more time than expected, which process is using the most CPU time, and so on. For example, we can use pg_top to see how long a transaction takes to complete or how many times a specific query was executed. The following screenshot shows sample output of pg_top:
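pg_top draws much of its information from PostgreSQL's own statistics views. If the tool is not available, a rough equivalent of its "longest-running activity" view can be obtained with plain SQL against pg_stat_activity, for example:

```sql
-- Longest-running non-idle sessions first; illustrative only.
SELECT pid,
       usename,
       state,
       now() - query_start AS runtime,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST
LIMIT 10;
```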
PgBouncer. PgBouncer is used for managing connections to PostgreSQL from clients that cannot be trusted and are not using SSL connections (for example, Internet-facing web servers). It allows you to manage connections in a pool and scale out connection-handling capacity when you're under load or have a large number of clients connecting at once. PgBouncer works by accepting client connections and either passing them directly to a PostgreSQL backend or putting them into a pool, where they wait until they can be handed off to a PostgreSQL backend connection for processing requests and responses. So it basically acts as a proxy between clients and PostgreSQL backends, with configuration managed via config files and optional control commands sent through the PostgreSQL wire protocol. The following screenshot shows sample output of the PgBouncer status command:
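A minimal pgbouncer.ini along these lines might look as follows; the database name, paths, and limits here are illustrative placeholders, not recommendations.

```ini
; Minimal illustrative PgBouncer configuration.
[databases]
; Clients connecting to "appdb" on PgBouncer are proxied to this backend.
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; or "session" / "statement"
max_client_conn = 1000
default_pool_size = 20
```

With pool_mode = transaction, a backend connection is returned to the pool after each transaction, which is what lets a small pool serve many clients.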
In this section, we have looked at just some commonly used monitoring tools for database performance. There are many more tools available, but these should suffice for the most common monitoring tasks around database performance.
Database System Tuning
In this section, we will discuss some strategies for improving database performance using different approaches, such as planning indexes, setting up shared buffers, and disabling query logging.
When we talk about indexing in relational databases, there are many myths surrounding it that need to be dispelled before we proceed with indexing strategies in PostgreSQL. When planning indexes, you first need to understand what the system is expected to do with the data; only then can appropriate indexes be created for good performance, as outlined in the following points:
Instead of focusing on how many indexes you need, focus on what your system needs to support – especially when you are dealing with reporting applications that read records from tables (as opposed to online transaction processing systems, where throughput requirements are more critical). Also, does your system need random access or sequential access? The indexing strategy for these two cases is different: indexes don't always help with sequential reads (especially if you have B-Tree indexes) but usually help with random reads (no matter what index type you use).
When planning indexes, always consider what happens if you delete an index: do you still need it? Can you recreate it later if required? Is it part of any constraints? Avoid adding too many indexes in one go; test everything in stages and focus on the changes that actually deliver results before moving to the next step.
Indexes take up disk space, which impacts insert performance, so keep an eye on that too, and make sure you have enough free disk space available before creating new indexes. If your table has many columns and you create complex indexes on all of them while trying to optimize the table for random reads, you will lose insert speed, because an insert needs to update every index at once, which can lead to disk contention. Try adding indexes in stages, based on business requirements, and evaluate how much insert speed drops with each new index.
Although PostgreSQL 9 supports index-only scans, they don't improve read performance significantly compared to sequential scans (because most of the work is typically done by random reads), so index-only scans won't make much difference unless you have a huge volume of data. Before spending time on index planning, run your normal workload against your table(s) once or twice so that you get an idea of how well they perform without any indexes. You can use the EXPLAIN ANALYZE command for this purpose; see Chapter 2, SQL Language – An Overview, for details on how this command works. Once you understand how your table(s) perform without indexes, you can start planning and decide how many indexes you need, or whether the existing ones are good enough.
While planning indexes, also think about index maintenance. Do you have enough free disk space, and if not, how much do you need before adding a new index? How fast are your disks? Do you need more IOPS? What kind of data are you inserting? Do you have analytical queries that need fast responses? Is your data changing frequently? All of these factors will affect your decisions. Remember that each time you change existing data (for example, add, modify, or delete any field), all the indexes on it get updated too, which means extra work for your system, so plan accordingly. Finally, don't forget about unique constraints while thinking about additional indexes: in PostgreSQL, a unique constraint is enforced by a unique index, so you may not need a separate index on the same columns.
Now that we know what goes into planning indexes, let us look at some strategic approaches to index planning. We will mainly consider B-Tree indexes in this section, because they are the most commonly used index type in PostgreSQL. For other types of indexes
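The workflow described above – measure first, then index, then measure again – can be sketched as follows. The table, column names, and predicate are hypothetical, for illustration only.

```sql
-- Hypothetical table standing in for a real workload.
CREATE TABLE orders (
    id          bigserial PRIMARY KEY,
    customer_id bigint NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
);

-- Baseline: run the workload without extra indexes and inspect the plan.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

-- If the plan shows a costly sequential scan for this random-access
-- lookup, add a B-Tree index and compare the plan and timings again.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
```

Comparing the two EXPLAIN ANALYZE outputs shows whether the index is actually used and what it buys you before you commit to maintaining it.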
The process of integrating Vtiger and Amazon EC2 may seem complicated and intimidating. This is why Appy Pie Connect has come up with a simple, affordable, and quick solution to help you automate your workflows. Click on the button below to begin.