The document discusses handling memory accesses for big data workloads. It proposes an architecture called a "funnel" to more efficiently process "non-temporal" or "read-once" memory accesses that exhibit no data reuse. The funnel would be placed close to data storage so that such data need not be staged in DRAM at all, reducing bandwidth bottlenecks and the energy wasted on unnecessary data movement. Analytical models in the document indicate that the funnel can improve performance and energy efficiency by reserving expensive DRAM accesses for data that actually exhibits temporal locality. Open questions remain around software models, the handling of shared data, and the hardware implementation of computational capabilities at the funnel.
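To make the energy argument concrete, here is a minimal back-of-the-envelope sketch of the kind of analytical model the document alludes to. It is not the document's model: the function name, the per-byte energy constants, and the reuse-fraction parameter are all illustrative assumptions. It simply compares a baseline that moves every byte through DRAM against a funnel that consumes read-once data near storage and promotes only the reused fraction to DRAM.

```python
# Illustrative comparison (assumed constants, not from the document):
# baseline moves every byte to DRAM; the funnel processes read-once data
# near storage and promotes only data with temporal locality to DRAM.

def movement_energy(total_bytes, reuse_fraction,
                    e_storage_to_dram=20e-12,  # J/byte, assumed cost to move data into DRAM
                    e_dram_access=10e-12,      # J/byte, assumed DRAM read cost
                    e_funnel_access=4e-12):    # J/byte, assumed near-storage access cost
    """Return (baseline_joules, funnel_joules) for one pass over total_bytes."""
    reused = total_bytes * reuse_fraction   # data that exhibits temporal locality
    read_once = total_bytes - reused        # non-temporal data, touched once

    # Baseline: every byte travels to DRAM and is then read from DRAM.
    baseline = total_bytes * (e_storage_to_dram + e_dram_access)

    # Funnel: read-once data is consumed near storage; only reused data
    # pays the trip to DRAM plus its DRAM access.
    funnel = (read_once * e_funnel_access
              + reused * (e_storage_to_dram + e_dram_access))
    return baseline, funnel


if __name__ == "__main__":
    for reuse in (0.05, 0.25, 0.75):
        base, fun = movement_energy(total_bytes=1 << 40, reuse_fraction=reuse)
        print(f"reuse={reuse:.0%}: baseline={base:.1f} J, "
              f"funnel={fun:.1f} J, savings={1 - fun / base:.0%}")
```

Under these assumed constants, the savings shrink as the reuse fraction grows, which matches the intuition that the funnel helps most when workloads are dominated by read-once accesses.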