
This patch addresses two cache issues #295

Open
wants to merge 1 commit into base: branch-0.8

Conversation

sundeepn
Contributor

  1. The 'cache tablename' directive overwrites existing memory tables and can cause memory leaks when the cache command is used on an already cached table. Caching a new partition also causes a memory leak if the partition already exists.
  2. Using the 'cache' directive on a partitioned table causes the table to become corrupt. Only the last cached partition is accessible, with the rest being leaked.

@sundeepn
Contributor Author

@rxin - Please review.

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

    val tableKey = MemoryMetadataManager.makeTableKey(databaseName, tableName)
    // Clear out any existing tables with the same key; prevent memory leak
    if (containsTable(databaseName, tableName)) {
      logInfo("Attempt to create new table when one already exists - " + tableKey)
Member

These should be checked by Hive's DDL code. If the problem is caused by CACHE <table> handling, then we should add a check in SharkDDLSemanticAnalyzer that throws an exception if the user tries to CACHE a table already stored at the MEMORY level.
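
A sketch of the kind of guard being suggested, assuming the check runs somewhere along the DDL analysis path; this is hypothetical (the object and method names below are made up, not the actual SharkDDLSemanticAnalyzer code):

import org.apache.hadoop.hive.ql.parse.SemanticException
import shark.memstore2.MemoryMetadataManager

// Hypothetical guard sketch; not the actual SharkDDLSemanticAnalyzer code.
object CacheCommandCheck {
  def checkCacheCommand(
      databaseName: String,
      tableName: String,
      metadataManager: MemoryMetadataManager): Unit = {
    if (metadataManager.containsTable(databaseName, tableName)) {
      // Fail fast instead of silently rebuilding (and leaking) the table.
      throw new SemanticException(
        "Table " + databaseName + "." + tableName +
        " is already cached at the MEMORY level")
    }
  }
}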

Contributor Author

This is a problem when we deal with tables that mutate, i.e. folks using cache on partitioned tables or on tables that get data inserted. That creates a need to call cache a second time. I think we need to support repeated calls to cache without an exception.

Member

We could silently ignore repeated cache calls, or return a more descriptive error message saying that the first cache call ensures that successive inserts on a MEMORY table will be pinned in memory.

Contributor Author

That feature does not work. Here is what fails for me:

Create external partitioned table
add partition
cache table
add partition

The second partition is not in memory, and when I query the table I get partial and incorrect results, i.e. I only get back results from the first partition and no error is reported to the user. It's more than an exception/messaging issue.

Member

Sorry, I wasn't too clear above. That issue of corrupted partitioned tables has been addressed in the PR linked in the comments below. These inline comments about the MemoryMetadataManager workaround address the concern that reloading entire tables on repeated CACHE calls is redundant if the first call works correctly (given the fix for partitioned tables).

On Feb 26, 2014 10:48 AM, "sundeepn" notifications@github.com wrote:

In src/main/scala/shark/memstore2/MemoryMetadataManager.scala:

@@ -54,10 +54,16 @@ class MemoryMetadataManager extends LogHelper {
       databaseName: String,
       tableName: String,
       cacheMode: CacheType.CacheType): MemoryTable = {
-    val tableKey = MemoryMetadataManager.makeTableKey(databaseName, tableName)
-    val newTable = new MemoryTable(databaseName, tableName, cacheMode)
-    _tables.put(tableKey, newTable)
-    newTable
+    val tableKey = MemoryMetadataManager.makeTableKey(databaseName, tableName)
+    // Clear out any existing tables with the same key; prevent memory leak
+    if (containsTable(databaseName, tableName)) {
+      logInfo("Attempt to create new table when one already exists - " + tableKey)

Contributor Author

I see. I have not looked at the 0.9 code yet, but in 0.8.1 the new partition does not get automatically cached without issuing another 'cache tablename' directive. I agree that we should avoid reloading the entire table for each cache directive.

Member

Yeah, the commits will have to be back-ported to 0.8.2.

@harveyfeng
Member

#2 is actually a bug in SparkLoadTask#loadPartitionedMemoryTable(); I'll submit a fix in a bit.
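
As a hypothetical illustration of this class of bug (a standalone sketch, not the actual SparkLoadTask code): if each partition load replaces a single table-level reference instead of adding an entry to a per-partition map, only the last loaded partition remains reachable and earlier ones are leaked, which matches the "only the last cached partition is accessible" symptom described above.

import scala.collection.mutable

// Hypothetical stand-in for a cached partition; the real Shark types are richer.
case class CachedPartition(partitionKey: String, rows: Seq[String])

// Buggy pattern: each load overwrites one shared reference, so only the last
// loaded partition is visible afterwards and earlier ones are leaked.
class BuggyPartitionedTable {
  private var lastLoaded: Option[CachedPartition] = None
  def load(p: CachedPartition): Unit = { lastLoaded = Some(p) }
  def scan(): Seq[String] = lastLoaded.toSeq.flatMap(_.rows)
}

// Fixed pattern: keep a map keyed by partition spec, so every cached
// partition stays reachable.
class PartitionedTableSketch {
  private val partitions = new mutable.HashMap[String, CachedPartition]()
  def load(p: CachedPartition): Unit = { partitions(p.partitionKey) = p }
  def scan(): Seq[String] = partitions.values.toSeq.flatMap(_.rows)
}

object PartitionDemo {
  def main(args: Array[String]): Unit = {
    val p1 = CachedPartition("dt=2014-02-25", Seq("a", "b"))
    val p2 = CachedPartition("dt=2014-02-26", Seq("c"))

    val buggy = new BuggyPartitionedTable
    buggy.load(p1); buggy.load(p2)
    println(buggy.scan())   // only rows from the last partition: List(c)

    val fixed = new PartitionedTableSketch
    fixed.load(p1); fixed.load(p2)
    println(fixed.scan())   // rows from both partitions
  }
}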

@harveyfeng
Member

This should address it: #296

@AmplabJenkins

Merged build finished.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/Shark-Pull-Request-Builder/12151/
